Preprint
Article

This version is not peer-reviewed.

Gamifying Engagement in Spatial Crowdsourcing: An Exploratory Mixed-Methods Study on Gamification Impact among University Students

A peer-reviewed article of this preprint also exists.

Submitted: 19 May 2025
Posted: 20 May 2025


Abstract
Citizen Science increasingly leverages digital platforms to mobilize broad public participation in environmental data collection. However, most initiatives struggle with declining engagement and sustained motivation. This study investigates the effects of gamification—specifically, points, daily-streak bonuses, and real-time leaderboards—on university students’ engagement, accomplishment, and immersion during a five-day, campus-wide Citizen Science intervention using the GREENCROWD platform. Employing a convergent mixed-methods design, we combined behavioural log analysis, validated psychometric scales (GAMEFULQUEST), and post-experiment interviews to triangulate both quantitative and qualitative dimensions of engagement. Results reveal that gamified elements significantly increased participants’ sense of accomplishment and initial motivation, which is reflected in higher average scores for goal-directed engagement and recurring qualitative themes related to competence and recognition. However, deeper immersion and sustained “flow” were less robust with repetitive task design. While the intervention achieved only moderate long-term participation rates, it demonstrates that thoughtfully implemented game mechanics can meaningfully enhance engagement without undermining data quality. These findings offer actionable guidance for designing more adaptive, motivating, and inclusive Citizen Science solutions and underscore the importance of mixed-methods evaluation in understanding complex engagement processes.
Keywords: 

1. Introduction

This study addresses how different gamification strategies affect citizen engagement and data quality in spatial crowdsourcing, offering timely insights for enhancing participation in initiatives like GREENGAGE [1]. Citizen Science is reshaping the landscape of data-driven research by enabling systematic participation of non-professionals in the scientific process. This paradigm shift—fuelled by the proliferation of mobile technologies, low-cost sensors, and digital platforms—has significantly expanded the scale, resolution, and accessibility of ecological and environmental data [2,3]. In urban contexts, where environmental and infrastructural dynamics are complex and rapidly evolving, Citizen Science initiatives have proven effective in generating fine-grained, spatially distributed information, critical for monitoring public health, biodiversity, and sustainability challenges [4,5]. Beyond data collection, such initiatives embody a democratization of science, fostering co-production of knowledge, civic empowerment, and more inclusive environmental governance frameworks [6,7]. Moreover, citizen-generated data are increasingly being recognized as valuable inputs for policy design, Sustainable Development Goals (SDG) monitoring, and global reporting mechanisms—provided that appropriate mechanisms for quality assurance, interoperability, and ethical stewardship are in place [4,8]. Taken together, these developments position Citizen Science as both a methodological innovation and a critical infrastructural pillar for contemporary sustainability science.
Despite the rapid proliferation of mobile apps, low-cost sensors, and online platforms, most citizen-science programmes remain vulnerable to a well-documented “engagement-quality” spiral, especially as the complexity of Citizen Science protocols increases [9]. Multi-year evaluations consistently reveal that, after the initial excitement fades, participation drops sharply—whether in biological invasions [10,11], urban biodiversity campaigns curtailed by the pandemic [12], plastic-pollution monitoring networks [13], or hydrological observatories [14]. Participation, therefore, becomes dominated by a narrow nucleus of enthusiasts, leaving broad geographic and sociodemographic gaps and amplifying sampling bias [15,16]. Attrition also undermines data fidelity: declining motivation correlates with misidentifications, protocol drift, and “careless responding” artefacts that can inflate error rates by 10–15% in unattended surveys [17,18]. The result is an uneasy trade-off between coverage and credibility, especially when task complexity is high, feedback is scant, or automation displaces perceived volunteer agency [19,20]. Mitigating this compound deficit in sustained motivation and methodological rigour is thus indispensable if citizen-generated evidence is to fulfil its scientific and policy promise.
Recent research has underscored that gamification in citizen science is most effective not when it merely attracts participants through superficial rewards, but when it strengthens their intrinsic motivations and reinforces a sense of meaningful contribution. For instance, studies of platforms such as Foldit and Eyewire consistently reveal that participants are primarily driven by the opportunity to contribute to real science, rather than by game elements themselves [21,22,23]. Nevertheless, those same game elements—such as points, rankings, and collaborative play—play a pivotal role in sustaining engagement over time by nurturing intellectual challenge, peer learning, and a sense of community identity [21,24,25]. Curtis [21], in a study of motivation to participate in the online citizen-science game Foldit, and Tinati et al. [21] emphasize that perceived progress, recognition, and interaction with both scientists and peers are critical for long-term commitment. In contrast, recent reviews highlight that poor design, unclear communication, and shallow gamification can rapidly erode motivation [26,27]. Taken together, these insights signal that successful gamified systems must align gameplay with scientific purpose, offer adaptive motivational scaffolding, and account for diverse user profiles to avoid narrowing participation to technically skilled or intrinsically motivated individuals.
A gameful participation approach can greatly enhance motivation. Evidence is accumulating that motivational scaffolds grounded in game design can curb attrition while reinforcing data-quality controls. Gamified point systems, badges, and territorial “conquest” mechanics have raised completion rates in voluntary geographic-information tasks and increased spatial coverage without compromising positional accuracy [28]. Yet the empirical base remains thin: rigorous, in situ evaluations of game elements in location-dependent Citizen Science are scarce, short-lived, and rarely track objective error metrics over time [29]. Moreover, most projects still confine volunteers to a “contributory” role, withholding analytic feedback and thereby muting the very sense of competence and social recognition that sustains engagement [30]. Reviews of smart-city apps echo this gap, noting that incentives are either absent or poorly aligned with participant profiles, resulting in rapid post-launch drop-offs [31]. For example, in the GREENGAGE project [32], the concept of a reward-based system, ‘Social Coin,’ did not gain significant interest from pilot owners during the co-creation and co-design stages of the citizen science pilots, which may have influenced its uptake during subsequent campaign phases. This highlights the importance of aligning incentive mechanisms with stakeholders’ values and expectations early in the design process to ensure meaningful engagement and sustained participation in GREENGAGE campaigns. Addressing these gaps through theory-informed, longitudinal trials of adaptive gamification constitutes the next critical step toward resilient, high-integrity citizen-science infrastructures.
A notable approach in this direction is that of Puerta-Beldarrain et al., who developed the Volunteer Task Allocation Engine (VTAE), a system that emphasizes user experience and equitable spatial distribution of tasks in the context of altruistic participation [33]. Drawing on a growing body of empirical work demonstrating that game mechanics can foster sustained, higher-quality contributions in volunteered geographic information and other participatory-sensing contexts [28,29], the present study evaluates how point rewards, daily-streak bonuses and real-time leaderboards embedded in the GREENCROWD platform influence university students’ self-reported engagement—operationalised through perceived accomplishment and immersive flow—while they gather geolocated environmental observations. University students constitute a strategically important cohort: they are digitally proficient yet chronically time-constrained, and their future professional trajectories position them to shape urban-sustainability agendas. Accordingly, our investigation seeks design principles that amplify motivation without imposing inequitable or extractive workloads, thereby answering recent ethical critiques of “dark” citizen-science models that covertly harvest unpaid labour [34] and complementing calls to widen participation beyond highly specialised hobbyists [35,36]. Against this backdrop, we ask whether gamification improves not only the volume but also the subjective enjoyment and informational value of contributions—insights essential for next-generation citizen observatories tasked with balancing engagement, data quality, and civic legitimacy at scale [37,38,39,40].
This study, therefore, makes three interrelated contributions. First, it is one of the earliest mixed-methods examinations of how discrete game mechanics operate in situ within a location-based, urban citizen-science setting populated by university students. This mixed methods approach combines quantitative data (such as structured surveys, cumulative points) and qualitative data (such as interviews) to provide a more complete and deeper understanding of the impact of gamification on participants’ experience. This audience remains under-represented in the gamification literature outside of classroom contexts [28,29]. Second, by triangulating validated self-report scales with post-hoc interviews, we move beyond coarse engagement metrics (e.g., total submissions) and reconstruct the underlying experiential texture of accomplishment and flow, yielding a richer account of motivational processes than survey-only or log-file studies can provide. Third, the empirical insights translate into actionable design levers—optimal point weighting, streak calibration, and socially salient but ethically balanced leaderboards—that platform developers and municipal “smart-city” teams can deploy to sustain participation while safeguarding data fidelity. In doing so, the work extends current debates on citizen observatories from proof-of-concept prototypes toward scalable, evidence-based frameworks for participatory urban sensing and policy co-production.
Three objectives guide the work: (i) to quantify the extent to which discrete game mechanics—points, daily-streak bonuses and real-time leaderboards—elevate university volunteers’ self-reported engagement, accomplishment and immersion during location-based data collection; (ii) to uncover the motivational and behavioural pathways through which those mechanics operate; and (iii) to delineate the contextual drivers and constraints that determine whether gamification can sustain contribution volumes while safeguarding data quality in urban citizen-science infrastructures. Consistent with these aims, we advance the following testable statement:
Hypothesis. 
Introducing game elements in a location-based citizen-science platform will produce statistically significant gains in participants’ engagement, perceived accomplishment, and immersive “flow” relative to expectations for non-gamified citizen-science activities.
To probe the mechanisms and boundary conditions underlying this hypothesis, we address three interrelated research questions:
RQ1 How do university students experience engagement, accomplishment, and immersion while participating in a gamified, campus-wide citizen-science experiment?
RQ2 What motivational and behavioural patterns arise from using points, streaks, and leaderboards during geospatial data collection?
RQ3 Which qualitative factors—such as perceived value, social drivers or logistical barriers—influence participation endurance and the perceived credibility of the data produced?
In doing so, this study contributes to the broader discourse on digital solutions for participatory governance in smart cities by demonstrating how gamified platforms can foster active citizen involvement in data-driven urban interventions. By embedding motivational game elements within a location-based system, we explore how digital tools can enhance civic engagement, improve environmental data quality, and support more inclusive and responsive forms of urban governance.
To address the research questions outlined above, the remainder of this paper is structured as follows: Section 2 presents the related work, providing context and background for this study. Section 3 outlines the research methodology, including a detailed description of the research design and the GREENCROWD engagement platform. Section 4 presents the results, followed by a detailed discussion in Section 5. Finally, the conclusions and future directions are discussed in Section 6.

2. Related Work

As this research spans multiple dimensions, including citizen science, rewards, gamification, and participatory platforms, the related work is organized accordingly to provide a comprehensive background.

2.1. Citizen Science and Engagement

Over the last decade, Citizen Science has matured from ad-hoc volunteer observation to a recognised research infrastructure, particularly in environmental and urban studies. Large-scale syntheses show that citizen contributions now underpin high-resolution biodiversity, air-quality, and climate datasets that would be prohibitively costly to obtain otherwise [2]. At the same time, longitudinal audits reveal a persistent “long-tail” pattern: fewer than 10% of registrants sustain activity beyond the first month, leaving spatial and socio-demographic gaps in coverage [41]. Data-quality meta-analyses indicate that volunteer observations can match expert benchmarks when robust protocols, training, and post-hoc validation are in place [7]; yet misidentifications and protocol drift rise sharply once motivation fades [17]. Consequently, contemporary work frames engagement as a multidimensional construct—cognitive, affective, behavioural, and social—that must be actively engineered throughout the project life-cycle [42].

2.2. Gamification in Non-Game Contexts

Gamification—the deliberate integration of game mechanics into non-game systems—has become a mainstream design strategy in information and learning technologies. A review of 819 Web of Science (WoS) indexed research papers confirms that points, badges and leaderboards (PBL) dominate practice, with education, health and crowdsourcing as the primary application areas [43]. Meta-analytic evidence using the Hedges’ g coefficient [44] shows small-to-moderate positive effects on cognitive (g ≈ 0.49), motivational (g ≈ 0.36) and behavioural (g ≈ 0.25) outcomes in formal learning [45]. Effect heterogeneity is explained by design nuance: narrative context, balanced competition–co-operation, and personalised feedback consistently amplify impact, whereas “points-only” implementations risk novelty decay and user fatigue. These findings highlight the need for theory-led, user-centred gamification rather than bolt-on reward schemes.
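For reference, Hedges’ g is a standardized mean difference with a small-sample bias correction; a standard textbook formulation (stated here for orientation, not quoted from [44] or [45]) is g = J(ν)·(M₁ − M₂)/s_pooled, where s_pooled = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2)], J(ν) ≈ 1 − 3/(4ν − 1), and ν = n₁ + n₂ − 2 degrees of freedom; values near 0.2, 0.5 and 0.8 are conventionally interpreted as small, moderate and large effects.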

2.3. Gamification in Citizen Science

While the integration of gamification into Citizen Science platforms is frequently promoted as a strategy to enhance engagement and data quality, empirical evidence remains limited and nuanced. Controlled field studies have shown that the deployment of game mechanics—such as progress points, leaderboards, and spatial “quests”—can increase short-term participation rates and expand the spatial and thematic coverage of contributions [28]. However, the literature consistently highlights that such effects are often transient: participant motivation typically wanes after the initial novelty effect, leading to declines in retention and the emergence of a core group of highly engaged contributors, while the majority contribute sporadically or disengage entirely [2,16,18].
Crucially, the impact of gamification on data quality is ambivalent. On one hand, competition and feedback mechanisms can improve accuracy and learning among motivated participants [28,46]. On the other hand, evidence points to risks of “quantity-over-quality” behaviours, careless or opportunistic submissions, and protocol violations, particularly when incentives are not carefully aligned with project goals or when feedback and recognition mechanisms are absent [17,34,47].
Moreover, gamified Citizen Science initiatives often default to a contributory model, where participants are restricted to data collection roles and rarely engage in co-design or interpretation of the collected data, i.e., taking part in the different stages of the Citizen Science loop. This limitation reduces the sense of ownership, long-term motivation, and ultimately, the sustainability and inclusivity of the projects [30,31,35]. Additionally, digital game mechanics may disproportionately attract technologically literate individuals, inadvertently exacerbating demographic biases and limiting broader community involvement [30,31].
In sum, while gamification offers a valuable toolkit for fostering participation and learning in Citizen Science, significant challenges remain regarding the long-term retention of volunteers once novelty fades, the design of equitable and inclusive game layers, and the implementation of robust mechanisms to safeguard data quality against “gaming the system.” Addressing these gaps will be critical for advancing the scientific and societal impact of gamified Citizen Science platforms.

2.4. Theoretical Foundations

Recent research underscores that engagement and sustained participation in Citizen Science are driven by a complex interplay of motivational factors, which include but are not limited to intrinsic interest, perceived impact, and social or educational benefits [30,48]. Intrinsic motivation arises from personal interest or a sense of accomplishment, whereas extrinsic motivation is driven by rewards, the outcomes of completed actions, approval, or the avoidance of disapproval. While explicit adoption of formal motivational theories such as Self-Determination Theory (SDT) is still limited in the environmental Citizen Science literature, empirical findings consistently reveal that autonomy, perceived competence, and social connectedness are critical for fostering long-term motivation and high-quality contributions [35,46].
Game elements in Citizen Science—such as optional tasks, tiered challenges, personalized feedback, and collaborative activities—are frequently aligned with these motivational drivers. Studies show that autonomy-supportive and competence-building activities and mechanisms for recognition and social interaction are associated with higher participant retention and improved data quality [2,28,48].
Furthermore, systematic reviews highlight the importance of triangulating quantitative and qualitative methods—for example, combining digital logs of participant activity, structured surveys, and in-depth interviews—to disentangle initial novelty effects from deeper, more persistent forms of engagement [2,17,46]. This mixed-methods approach is increasingly regarded as essential for evaluating both the effectiveness of game-based interventions and the mechanisms that underlie sustained motivation in Citizen Science contexts.
Nevertheless, a recurring gap in the literature is the lack of validated instruments specifically tailored to measure “gamefulness” and the quality of engagement in real-world Citizen Science. As projects scale and diversify, developing robust, context-sensitive measures of participant experience—including enjoyment, immersion, sense of impact, and creative contribution—remains a critical area for future research [30,31].

3. Methodology

3.1. Study Design

We ran a five-day field experiment to explore how gamification works in spatial crowdsourcing activities in Citizen Science. We used quantitative and qualitative methods concurrently because (i) existing research on this topic is limited and rarely tracks changes over time, and (ii) our group of volunteer participants was relatively small—too small for strong statistical analysis, but large enough to gain deep insights into their experiences. While short in duration, the five-day timeframe was sufficient to observe meaningful fluctuations in motivation and participation, especially as tasks were assigned daily and required on-site activity, thereby simulating key aspects of sustained engagement within a compressed timeframe. By combining activity logs, surveys, and interviews, we were able to understand how much, how, and why participant engagement evolved during the study.

3.1.1. Participants and Setting

Participants were recruited at the University of Deusto from two different classes.
  • Registered users. A total of 49 users created a GREENCROWD account; however, only 40 participants completed a baseline socio-demographic survey (age bracket, gender identity, study major, employment status, digital-skills self-rating, perceived disadvantage, residence postcode). Most participants (38 out of 49; 77.55%) belonged to the 18–24 years age group, classified as young adults or university-age. Only two participants fell outside this group—one in the 25–34 range (early adulthood) and one in the 45–54 range (late adulthood), each representing 2.04% of the sample. Nine users (18.37%) did not declare their age range. The interquartile range (IQR = 20–24)—which represents the middle 50% of the participants’ age distribution—confirms that most respondents were in their early twenties, aligning with the university student demographic (Table 1). In terms of gender identity, 30 participants identified as male (75%) and 10 as female (25%), indicating a gender imbalance in the sample. Importantly, no significant differences were observed across groups regarding study major, employment status, digital-skills self-rating, perceived disadvantage, or residential location, supporting the demographic homogeneity of the analytical sample.
  • Active contributors. Seven students (≈15% of registrants) submitted at least one complete task set during the study window; this subgroup constitutes the analytical sample for behavioural metrics.
  • Ethics. The protocol, depicted in Figure 1, was reviewed and approved by the Ethical Assessment Committee of the University of Deusto (Ref. ETK-61/24-25). In-app onboarding provided a study information sheet; participants gave explicit, GDPR-compliant e-consent and could withdraw at any point without penalty.

3.2. Intervention: The Gamified Experiment

The intervention was implemented over five consecutive days (Monday to Friday) on a university campus using the GREENCROWD digital platform. Initially, a brief in-person meeting was held to present the experiment, recruit participants, and address questions. All subsequent activities, including daily tasks, reminders, data collection, and post-experiment interviews, were conducted remotely.

3.2.1. Daily Task Structure

Each day at 9:00 AM, three unique Points of Interest (POIs) were activated within each of the two selected campus areas. This rotation ensured environmental diversity and minimized repetitive behaviour. Participants were notified by email at the same hour, serving as a task reminder and motivational tool. The emails included each participant’s current position on the public leaderboard and a tailored motivational message, encouraging those in leading positions to maintain performance, and inviting inactive users to re-engage and earn more points.
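As an illustration of this reminder logic, the following minimal sketch (hypothetical names and message wording; not taken from the GREENCROWD code base) shows how a daily e-mail could be tailored to a participant’s leaderboard position and recent activity:

from dataclasses import dataclass

@dataclass
class Participant:
    alias: str
    rank: int               # current position on the public leaderboard
    active_yesterday: bool  # whether the participant submitted tasks the previous day

def build_daily_reminder(p: Participant, total_participants: int) -> str:
    # Tailor the motivational line to the participant's situation, as described above.
    if not p.active_yesterday:
        motivation = "New Points of Interest are live today: re-engage and earn more points!"
    elif p.rank <= 3:
        motivation = f"You are currently #{p.rank} of {total_participants}. Keep it up to stay on top!"
    else:
        motivation = f"You are currently #{p.rank} of {total_participants}. Today's tasks can move you up the leaderboard."
    return (
        f"Good morning {p.alias},\n"
        "Three new Points of Interest per campus area are active as of 9:00 AM.\n"
        + motivation
    )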

3.2.2. Micro-task Components

For each POI, participants were required to:
  • Complete a site survey (environmental rating, issue identification, and site usage frequency).
  • Upload a geo-tagged photograph reflecting current site conditions.
  • Provide suggestions for site improvement and indicate willingness to participate in future student-led initiatives.

3.2.3. GREENCROWD

GREENCROWD is an open-source digital platform specifically designed to facilitate spatially distributed Citizen Science activities in urban contexts [49]. It supports the structured collection of geolocated environmental data (e.g., site conditions, public space usage, and perceived issues), powered by an open-source gamification engine called GAME (Goals And Motivation Engine) [50]. The platform serves as a tool for both data acquisition and civic engagement, targeting university students, local communities, and research practitioners seeking to mobilize non-professional contributors in data-driven sustainability initiatives. GREENCROWD addresses a core challenge in Citizen Science—namely, the maintenance of sustained participation over time—by embedding lightweight gamification layers (e.g., points, streaks, leaderboards) without compromising the privacy or agency of users. It is particularly suited to campus- or district-scale deployments requiring fine-grained spatial resolution and structured tasks.
As an open-source tool, GREENCROWD is licensed under a permissive model that encourages community-driven development and code transparency. Researchers and developers are free to audit, modify, or extend the platform, fostering long-term sustainability and collaborative innovation. Importantly, the system is explicitly designed to protect user privacy: it does not collect any personal information such as names or email addresses. Instead, user identity and attribute management are fully delegated to a Keycloak Identity and Access Management (IAM) system, which authenticates users and issues pseudonymous tokens that contain only the necessary claims (e.g., age range, gender identity, language preference). The GREENCROWD backend processes these tokens without storing or accessing any direct user metadata, thereby ensuring GDPR compliance and minimizing ethical risks in experimental contexts.
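The following minimal sketch illustrates this token-handling pattern; claim names, key handling, and function names are illustrative assumptions, not the actual GREENCROWD or Keycloak configuration:

import jwt  # PyJWT

ALLOWED_CLAIMS = {"sub", "age_range", "gender_identity", "locale"}  # pseudonymous claims only

def extract_pseudonymous_profile(token: str, issuer_public_key: str) -> dict:
    # Verify the IAM-issued signature and decode the token payload.
    payload = jwt.decode(token, issuer_public_key, algorithms=["RS256"],
                         options={"verify_aud": False})
    # Keep only the claims the platform actually needs; names, e-mail addresses
    # and other direct identifiers are never read or stored by the backend.
    return {claim: value for claim, value in payload.items() if claim in ALLOWED_CLAIMS}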
The application interface is composed of the following main components:
  • Interactive Map with POIs and Dynamic Scoring: After logging in, users can join any available campaign in the system and then view a map overlaid with daily POIs, each associated with specific environmental observation tasks and point values (which depend on the gamification group to which the user belongs). These values are dynamically updated based on time, frequency, or contextual rules defined in the campaign logic (Figure 2).
  • Modular Task Workflow: In this experiment the tasks were structured in a three-step format: (i) environmental perception ratings, (ii) geotagged photo uploads, and (iii) suggestion prompts and willingness-to-engage indicators. This modularity simplifies user experience and improves data completeness (Figure 3).
  • Points, Leaderboard and Feedback Layer: After submitting a response, participants are shown the points earned for the completed task (Figure 4). Participants can monitor their own cumulative points and relative position on a public leaderboard. While users can see their own alias and ranking, other entries appear anonymized (e.g., “***123”), preserving participant confidentiality while still leveraging social comparison as a motivational driver (Figure 5).
  • Device-Agnostic and Responsive Design: GREENCROWD is optimized for mobile devices, supporting real-time geolocation, camera integration, and responsive layouts, thereby reducing barriers to participation in field-based conditions.
Collectively, these features establish GREENCROWD as a technically robust and ethically sound platform for participatory sensing. Its combination of open-source accessibility, privacy-by-design principles, and user-centred gamification enables researchers to deploy scientifically credible interventions while preserving the autonomy and trust of contributors.
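To make the task structure concrete, a schematic data model for one daily POI and its three-step micro-task might look as follows (field and class names are hypothetical, not the actual GREENCROWD schema):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MicroTask:
    environment_rating: Optional[int] = None   # step (i): 1-5 perception rating
    photo_url: Optional[str] = None            # step (ii): geo-tagged photo upload
    suggestion: str = ""                       # step (iii): improvement suggestion
    willing_to_engage: Optional[bool] = None   # step (iii): future-participation flag

@dataclass
class DailyPOI:
    poi_id: str
    lat: float
    lon: float
    point_value: float                         # shown on the map; varies by gamification group
    task: MicroTask = field(default_factory=MicroTask)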

3.2.4. Gamification Layer

Participants were randomly assigned to one of three gamification groups—Random, Static, or Adaptive—each implementing a distinct point-calculation strategy based on task completion.
  • Random Group: Participants received a score generated by a stochastic function. If no previous point history existed, a random integer between 0 and 10 was assigned. If prior scores were available, a new score was randomly drawn from the range between the minimum and maximum historical values of previously assigned points. This approach simulates unpredictable reward schedules often found in game mechanics.
  • Static Group: Similar to the Random group in its initial stage, a random value between 0 and 10 was assigned in the absence of historical data. However, once past scores existed, the participant’s reward was determined as the mean of the minimum and maximum previous values. This method introduces a fixed progression logic, providing more predictable feedback than random assignment, while still maintaining some variability.
  • Adaptive Group: This group received dynamically calculated scores based on five reward dimensions informed by user behavior and task context:
    Base Points (DIM_BP): Adjusted inversely according to the number of previous unique responses at the same Point of Interest (POI), to encourage spatial equity in data collection.
    Location-Based Equity (DIM_LBE): If a POI had fewer responses than the average across POIs, a bonus equivalent to 50% of base points was granted.
    Time Diversity (DIM_TD): A bonus or penalty based on participation at underrepresented time slots, encouraging temporal coverage. This was computed by comparing task submissions during the current time window versus others.
    Personal Performance (DIM_PP): Reflects the user’s behavioral rhythm. If a participant’s task submission interval improved relative to their own average, additional points were awarded.
    Streak Bonus (DIM_S): Rewards consistent daily participation using an exponential formula scaled by the number of consecutive participation days.
Each dimension was calculated independently and summed to produce the total reward. This adaptive method aims to reinforce behaviors aligned with the goals of spatial, temporal, and participatory balance in Citizen Science.
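A compact sketch of these three strategies is given below; the control flow follows the description above, while the constants and the exact adaptive formulas (for example, the exponential streak scaling) are illustrative assumptions rather than the actual GAME engine implementation:

import random

def random_group_score(history: list) -> float:
    # Random group: stochastic reward bounded by the participant's own point history.
    if not history:
        return float(random.randint(0, 10))
    return random.uniform(min(history), max(history))

def static_group_score(history: list) -> float:
    # Static group: midpoint between the historical minimum and maximum.
    if not history:
        return float(random.randint(0, 10))
    return (min(history) + max(history)) / 2

def adaptive_group_score(responses_at_poi: int, avg_responses_per_poi: float,
                         submissions_in_window: int, avg_submissions_other_windows: float,
                         interval_hours: float, avg_interval_hours: float,
                         streak_days: int, max_base: float = 10.0) -> float:
    # Adaptive group: five independently computed dimensions, summed.
    dim_bp = max_base / (1 + responses_at_poi)                                        # DIM_BP: fewer prior responses, more points
    dim_lbe = 0.5 * dim_bp if responses_at_poi < avg_responses_per_poi else 0.0       # DIM_LBE: 50% equity bonus for under-sampled POIs
    dim_td = 1.0 if submissions_in_window < avg_submissions_other_windows else -1.0   # DIM_TD: time-slot diversity bonus/penalty
    dim_pp = 1.0 if interval_hours < avg_interval_hours else 0.0                      # DIM_PP: improvement on the user's own rhythm
    dim_s = 2.0 ** (streak_days - 1) if streak_days > 0 else 0.0                      # DIM_S: exponential streak bonus
    return dim_bp + dim_lbe + dim_td + dim_pp + dim_s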
Participants could view their score for each completed task in real time, as well as their cumulative ranking on a dynamic public leaderboard. While modest material prizes were awarded at the end of the intervention, their function was explicitly framed as recognition, not as the primary motivational driver. The gamification was implemented through:
  • Points: Awarded for each completed survey and photo submission. Participants could see their points both before and after each task.
  • Leaderboard: A public, real-time leaderboard displayed cumulative points and fostered social comparison.
  • Material prizes: In recognition of participation, eight material prizes (valued at €25–€30 each) were made available at the end of the study. These included LED desk lamps with wireless chargers, high-capacity USB 3.0 drives, wireless earbuds, and external battery packs (27,000 mAh, 22.5 W). The purpose of these rewards was to acknowledge and appreciate participants’ involvement, rather than to act as the primary incentive for participation. All prizes were communicated transparently to participants prior to the intervention, with an emphasis on their role as a token of appreciation rather than as competition drivers.

3.2.5. Technical Support and Compliance

A technical support form was available throughout the experiment. Only three support requests were received, all resolved within minutes. All activities and data were managed digitally; location and timestamps were logged automatically to ensure data quality and behavioural traceability.

3.2.6. Engagement Assessment

Engagement was assessed by:
  • Quantitative data: Task completion logs and the GAMEFULQUEST post-test scale (focusing on accomplishment and immersion dimensions) were gathered [51].
  • Qualitative data: Semi-structured interviews (conducted remotely after the intervention) were carried out, exploring motivation, perceptions of gamification, and the impact on engagement.
This structure ensured a robust yet feasible field experiment, with minimal logistical friction and maximized data integrity.
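As a minimal sketch of how the subscale scores reported in Section 4 can be derived (the aggregation shown is a common convention, assumed here rather than prescribed by the instrument), each participant’s subscale score is the mean of their 7-point item ratings, and group-level statistics are computed across participants:

from statistics import mean, stdev

def subscale_score(item_ratings: list) -> float:
    # Per-participant subscale score: mean of that participant's 1-7 item ratings.
    return mean(item_ratings)

def group_stats(participant_scores: list) -> tuple:
    # Group-level mean (M) and standard deviation (SD) across participants.
    return mean(participant_scores), stdev(participant_scores)

# Example with three hypothetical participants rating the eight accomplishment items.
accomplishment_scores = [subscale_score(r) for r in ([6, 5, 6, 5, 6, 6, 5, 6],
                                                     [5, 5, 4, 5, 6, 5, 5, 5],
                                                     [7, 6, 6, 6, 7, 6, 6, 7])]
M, SD = group_stats(accomplishment_scores)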

4. Results and Analysis

4.1. Participant Profile: Demographics and Participation Rates

A total of 7 registered participants (≈15%) engaged in at least one complete set of geo-located tasks during the five-day intervention, thus forming the analytical sample for behavioural and engagement analyses. However, only 5 of these participants completed the post-intervention survey assessing perceived accomplishment and immersion. The demographics of the active participants closely mirrored those of the full registrant pool, supporting the comparability of the analyses reported below. The overall participation rate (active contributors/registrants) was 15%, which aligns with rates reported in similar campus-based Citizen Science interventions [28].

4.2. Quantitative Findings

4.2.1. Distribution of Engagement, Accomplishment, and Immersion

Post-experiment engagement was assessed using validated subscales (accomplishment and immersion) from the GAMEFULQUEST instrument. The standard deviations for each survey item are depicted in Figure 6, which summarizes participant ratings across the two key engagement dimensions.
Accomplishment (M = 5.4, SD = 1.3): Items reflecting a sense of goal-directedness and striving for improvement received consistently high ratings (e.g., “It motivates me to progress and get better”, “It makes me feel like I have clear goals”), while items associated with self-assessment and performance standards showed moderate variance.
Immersion (M = 2.8, SD = 1.1): Most participants reported only moderate immersion, with higher variability on items relating to emotional involvement and separation from the real world. Only one participant approached the theoretical maximum on immersion indicators, suggesting that the gamified intervention was effective in promoting goal-oriented engagement but less so in generating flow-like absorption.

4.2.2. Notable Individual Differences and Trends

A radar plot of psychometric profiles (Figure 7) illustrates heterogeneity in how participants experienced the intervention. While accomplishment was uniformly elevated, immersion scores showed considerable spread, with some users experiencing marked “flow” and others remaining largely unaffected.

4.2.3. Association Between Task Completion and Engagement

Boxplots (Figure 8 and Figure 9) visualize the relationship between the number of tasks completed and mean accomplishment or immersion scores. Due to the small sample size, only the “1–2 tasks” group is represented. Still, clear trends are visible: participants who completed more tasks reported higher accomplishment and slightly elevated immersion, though the latter dimension displayed greater variability. Outliers in both directions (i.e., highly engaged but minimally active, or vice versa) suggest that engagement is shaped by the quantity of participation and individual motivational drivers.

4.2.4. Data Visualizations

4.3. Qualitative Insights

To complement quantitative findings and gain a richer understanding of the participant experience, we conducted a thematic analysis of post-experiment interviews. This qualitative inquiry was designed to uncover the nuanced motivational, behavioural, and emotional pathways that shaped engagement throughout the gamified Citizen Science intervention. By systematically coding interview transcripts, we identified recurring themes relating to initial motivations, perceived barriers, the impact of specific gamification elements, and the broader social-emotional context of participation. The following synthesis presents the main themes and illustrative quotations and subsequently contrasts these qualitative insights with the patterns observed in quantitative data. This integrated approach provides a comprehensive account of how and why university students engaged with the platform, as well as the practical and psychological factors influencing their sustained participation.

4.3.1. Motivations for Participation

  • Curiosity and Novelty: Several participants expressed initial curiosity about participating in a real-world experiment using a digital platform. One stated: “I wanted to see how it worked and if the platform would really motivate me to do the tasks.”
  • Desire to Contribute: There was a recurrent theme of wanting to contribute to environmental improvement on campus: “It felt good to think our observations could help the university or city get better.”
  • Appreciation for Recognition: Some cited the value of being recognized or rewarded, even if modestly: “I participated more because I knew there was a leaderboard and some prizes, but not only for that.”

4.3.2. Barriers and Constraints

  • Time Management: All participants mentioned that balancing the experiment with their academic workload was a challenge: “Some days I just forgot or was too busy to go to the POIs.”
  • Repetitiveness and Task Fatigue: A few noted the repetitive nature of tasks as a demotivating factor by the end of the week: “At first it was fun, but by day three it felt like doing the same thing.”

4.3.3. Perceptions of Gamification

  • Leaderboard and Points: Most reported that seeing their position on the leaderboard was a motivator for continued participation, but only up to a point: “I liked checking if I was going up, but when I saw I couldn’t catch up, I just did it for myself.”
  • Prizes as Acknowledgment: Participants did not view the material prizes as the main incentive, but as a positive gesture: “I would have done it anyway, but it was nice to have a little prize at the end.”
  • Fairness and Engagement: Some voiced that the system was fair because everyone had the same opportunity each day but also suggested ways to make the game more dynamic, such as varying tasks or giving surprise bonuses.

4.3.4. Social and Emotional Aspects

  • Sense of Community: Two interviewed participants discussed sharing experiences with classmates, even though there was no formal team component: “We talked about it in class, comparing our scores and photos.”
  • Enjoyment and Frustration: While most described the experience as “fun” or “interesting,” minor frustrations included technical glitches and lack of immediate feedback after submissions.

4.4. Integration of Results

The integration of quantitative and qualitative findings provides a robust, multidimensional view of how gamified elements shape engagement, accomplishment, and immersion in location-based Citizen Science among university students.
Quantitative results established that accomplishment—reflecting goal-oriented motivation and perceived progress—was consistently high across participants, as evidenced by elevated mean scores on the GAMEFULQUEST subscale. This is strongly echoed in interview themes: participants described a sense of satisfaction in completing tasks, a desire to contribute positively to their environment, and a general appreciation for being recognized through the platform’s feedback mechanisms. The leaderboard and point systems acted as immediate, visible markers of achievement, reinforcing the high accomplishment scores reported.
In contrast, immersion scores were more variable and generally moderate. This pattern was illuminated in qualitative accounts, where several participants described the tasks as initially engaging but increasingly repetitive by the end of the week. The lack of narrative variation or adaptive feedback contributed to diminished emotional absorption, aligning with the lower and more dispersed quantitative immersion scores.
Motivational pathways proved multifaceted. While gamification elements such as points, leaderboards, and small prizes generated initial enthusiasm and healthy competition, intrinsic motivators—such as personal interest, curiosity, and a sense of civic contribution—emerged as dominant sustaining factors. Participants emphasized that while external rewards were appreciated, they were not the principal drivers of engagement. These findings support the central tenets of Self-Determination Theory [52], particularly the importance of autonomy and competence in fostering lasting involvement.
Barriers to sustained participation—including academic workload, forgetfulness, and task repetitiveness—were reported in both data streams. Interviewed students specifically cited time constraints and the challenge of integrating participation into daily routines, which is consistent with the modest participation rate and the observed drop-off in task completion after initial days. Suggestions for improvement, such as more varied tasks and adaptive motivational messages, provide actionable directions for future platform iterations.
Overall, the triangulation of data reveals that while gamification successfully enhanced participants’ sense of accomplishment and initial engagement, its impact on deeper immersion was limited by task design and contextual constraints. The findings highlight the importance of balancing extrinsic motivators with opportunities for intrinsic satisfaction and underline the need for adaptive, user-centred design to sustain engagement in real-world Citizen Science settings.

5. Discussion

5.1. Interpretation of Findings

The present study provides nuanced empirical evidence regarding the impact of gamification on participant engagement, perceived accomplishment, and immersion within the context of a spatial crowdsourcing Citizen Science intervention among university students. Quantitative analyses revealed that game elements—primarily points and leaderboards—significantly elevated participants’ sense of accomplishment, with consistently high scores on goal-orientation and performance subscales. However, immersion, conceptualized as flow-like absorption in the task, was more variable and generally moderate across the sample.
These findings align with the predictions of Self-Determination Theory [52], which posits that engagement flourishes when activities fulfil basic psychological needs for competence, autonomy, and relatedness. The gamification mechanics employed in GREENCROWD appeared particularly effective in satisfying the need for competence, as reflected in participants’ reported satisfaction with progression, recognition, and achievement. Nevertheless, qualitative feedback highlighted that the repetitive structure and lack of narrative diversity limited the emergence of deep immersion. This result echoes prior studies emphasizing the critical role of adaptive, personalized feedback and contextual variation for sustaining flow states in gamified environments.
Intrinsic motivators — curiosity, civic contribution, and personal interest—emerged as key sustaining factors. While extrinsic rewards provided an initial engagement boost, most participants reported that these were secondary to the internal satisfaction of completing tasks and contributing to campus improvement. This pattern mirrors recent meta-analytic evidence [53] and suggests that gamification is most effective when it scaffolds, rather than supplants, intrinsic motivation.

5.2. Practical Implications

For the design of Citizen Science platforms, these results underscore the importance of integrating game mechanics that foster competence and provide visible, meaningful feedback, while also attending to the diversity and adaptability of task design. Leaderboards and point systems can be robust in maintaining early engagement, but sustaining long-term participation likely requires greater narrative richness and the possibility for users to shape their experience—be it through task variety, personalized feedback, or social features.
At the university level, interventions seeking to mobilize digitally literate, yet time-constrained, student populations may benefit from “lightweight” gamification layers emphasizing contribution recognition and community impact. Ensuring transparency in the reward system, minimizing competition for high-value prizes, and supporting intrinsic drivers such as campus stewardship are all recommended to maximize participation and data quality.

5.3. Methodological Reflections

This study demonstrates the value of a convergent mixed-methods approach, particularly in small-N, exploratory research contexts. By triangulating task-completion logs, psychometric instruments, and semi-structured interviews, we achieved both breadth and depth of insight—overcoming the limitations of any single data source. Using validated measures (e.g., GAMEFULQUEST) ensured psychometric rigor, while qualitative inquiry provided the necessary context to interpret individual variation and identify mechanisms underpinning the observed trends. This methodological pluralism is increasingly recognized as best practice in the study of complex interventions in Citizen Science and digital engagement [42,51].

5.4. Limitations

Several limitations must be acknowledged. First, the modest sample size and the use of a single university setting restrict the generalizability of findings. The observed effects may reflect idiosyncratic characteristics of the local context, cohort, or institutional culture. Second, potential self-selection and response bias may have inflated estimates of engagement or masked less positive experiences; for example, more motivated or tech-savvy students may have been disproportionately likely to participate and complete post-experiment surveys. Third, the near-universal digital proficiency of the sample precludes direct inference to populations with lower technology familiarity.

5.5. Future Research

Future research should pursue larger-scale, longitudinal studies across multiple campuses or community contexts to assess the robustness and sustainability of gamification effects over time. Comparative designs, including non-gamified control groups, will be critical for disentangling the specific drivers of engagement and data quality. Investigating adaptive gamification—where feedback, challenges, and rewards evolve in response to user behaviour and preferences—represents a promising direction for maximizing both inclusivity and effectiveness. Finally, exploring the intersection of gamification with social, ethical, and equity concerns remains vital as Citizen Science platforms scale and diversify. In contexts where the crowdsourced topic directly affects participants’ daily lives—such as noise pollution, air quality crises, or local infrastructure issues—intrinsic motivation tends to be higher. In such cases, gamification may shift from stimulating interest to sustaining engagement and enhancing data quality.

6. Conclusions

This study provides timely, empirical evidence regarding the impact of gamification on engagement, accomplishment, and immersion in spatial crowdsourcing. Our mixed-methods field experiment with university students reveals that carefully integrating game elements—such as points, daily-streak bonuses, and real-time leaderboards—can substantially elevate participants’ sense of accomplishment and goal-directed engagement. However, the translation of these mechanics into more profound immersive “flow” experiences remains limited, highlighting the challenge of sustaining emotional and cognitive absorption over time in repetitive, real-world data collection settings.
The central hypothesis, that introducing game elements would produce significant gains in engagement and perceived accomplishment relative to expectations for non-gamified activities, finds partial support. Quantitative and qualitative results demonstrate that gamified feedback mechanisms boost initial participation and motivation, particularly by fulfilling needs for competence and recognition (RQ1, RQ2). Yet, these effects are modulated by intrinsic motives—such as the desire to contribute to campus or community—and are tempered by barriers including task fatigue (RQ3). The sustainability of engagement thus appears contingent on a dynamic balance between extrinsic and intrinsic drivers and on the diversity and adaptability of platform design.
For research, these findings underscore the value of mixed-methods evaluation in unpacking not just the “how much” but also the “why” and “for whom” of gamification impacts. Methodological pluralism—combining psychometric assessment with qualitative insights—proves critical for understanding both the affordances and the boundaries of gamification in Citizen Science. Future studies should extend these insights to more diverse populations and longer-term interventions, with a focus on adaptive, user-centred mechanics.
For practice, our results offer clear guidance to designers of Citizen Science platforms and tech-savvy digital interventions. Game mechanics should be deployed not merely as superficial add-ons, but as thoughtfully integrated features that reinforce competence, provide transparent and meaningful feedback, and recognize contributions equitably. Attention must also be paid to minimizing barriers, refreshing task variety, and cultivating a sense of community, all of which are essential for maintaining both engagement and data quality at scale.
Ultimately, while gamification holds considerable promise for broadening participation and enhancing the user experience in spatial crowdsourcing, its effectiveness depends on design nuance, context sensitivity, and ongoing evaluation. As digital citizen observatories continue to proliferate in smart cities and academic contexts alike, evidence-based gamification strategies will be indispensable for transforming episodic volunteering into sustainable, impactful civic engagement.

Author Contributions

Conceptualization, F.V.-B., D.L.-d.-I, M.E., C.O.-R. and Z.K.; methodology, F.V.-B., D.L.-d.-I and C.O.-R.; software, F.V.-B.; validation, F.V.-B., D.L.-d.-I, M.E., C.O.-R., Z.K., K.S. and J.P.-A.; formal analysis, F.V.-B.; investigation, F.V.-B., M.E. and C.O.-R.; resources, D.L.-d.-I and M.E.; data curation, F.V.-B.; writing—original draft preparation, F.V.-B. and D.L.-d.-I; writing—review and editing, F.V.-B., D.L.-d.-I, M.E., C.O.-R., Z.K., K.S. and J.P.-A.; visualization, F.V.-B.; supervision, D.L.-d.-I and C.O.-R.; project administration, F.V.-B. and D.L.-d.-I; funding acquisition, D.L.-d.-I.

Funding

This research was funded by the European Union’s Horizon Europe GREENGAGE project (Grant ID 101086530), the DEUSTEK5 project (Grant ID IT1582-22), and the Basque University System’s A-grade Research Team Grant.

Data Availability Statement

The dataset generated and analyzed during this study is publicly available at: https://doi.org/10.5281/zenodo.15387354. The source code of the spatial crowdsourcing platform used in the experiment (GREENCROWD) is available at: https://github.com/fvergaracl/greencrowd. The gamification engine implemented in the study is accessible at: https://github.com/fvergaracl/game.

Acknowledgments

We sincerely thank all the university students who participated in the field experiment and contributed their time and insights to this study. We are also grateful to the GREENGAGE project for supporting the development and deployment of the experimental platform.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

SDG Sustainable Development Goals
VTAE Volunteer Task Allocation Engine
GDPR General Data Protection Regulation
POI Point of Interest
WoS Web of Science
PBL Points, Badges and Leaderboards
GAMEFULQUEST Gameful Experience Questionnaire (validated psychometric scale)
IQR Interquartile Range (statistical measure)
M Mean (statistical average)
SD Standard Deviation (statistical measure)
RQ Research Question
IAM Identity and Access Management
QUAN Quantitative (in mixed-methods research design)
QUAL Qualitative (in mixed-methods research design)

Appendix A. Assigned Task at Each Point of Interest

Part 1: Initial Observations
1. How would you describe the environment at this point? *
🔘 1 (Poor condition)
🔘 2
🔘 3
🔘 4
🔘 5 (Excellent condition)
2. How would you describe the environment at this point? *
  • Select All
  • 🗑️ Litter and garbage
  • 🚗 Air pollution or traffic congestion
  • 🔊 Noise from vehicles or people
  • 🌱 Lack of green areas or trees
  • 💧 Water puddles, drains or flooding
  • 🧱 Damaged infrastructure (e.g., benches, paths)
  • ⚠️ Safety concerns
  • ❌ None of the above
  • None
  • Other (describe)
3. How often do you use this space or pass by it? *
🔘 🕒 Daily
🔘 📆 A few times a week
🔘 🔁 Occasionally
🔘 🚫 Never before
Part 2: Visual Evidence
4. Take or upload a photo of this location that reflects its current condition. *
📤Upload a photo
Part 3: Final Reflections
5. What ideas or actions would help to improve this place? *
✍️ Write your suggestions below (e.g., cleaning, planting trees, installing signs, etc.):
____________________________________________________________________
6. Would you like to be part of future student-led or community-driven projects to improve urban spaces? *
🔘 ✅ Yes, I’m interested
🔘 🤔 Maybe, I need more info
🔘 ❌ No, not at this time
* Field required

Appendix B. Experiment Protocol: Evaluation of Gamification Impact in GREENCROWD

1. Justification and Ethical Considerations
Rationale for the Experiment:
As part of the previously approved project, we propose the inclusion of a complementary experiment focused on evaluating the impact of the GREENCROWD gamification strategy on user participation in collaborative tasks. The experiment will be integrated in a controlled and structured manner, fully aligned with established ethical principles.
Why is this experiment being added?
The primary objective is to empirically validate, through simulation and statistical analysis, the effect of the various dimensions of the gamification system on participant behavior. This study aims to:
- Understand how reward mechanisms (base points, geolocation equity, time diversity, personal performance, and participation streaks) influence participant motivation.
- Identify whether certain elements of the system may generate unintended effects (e.g., inequality in reward distribution or demotivation among specific participant profiles).
- Improve the quality and fairness of the system prior to large-scale implementation.
Added Value:
This experiment contributes:
- A systematic analysis of participatory behaviour under different reward conditions.
- Quantitative evidence on the effectiveness of the system’s design.
- A scientific foundation for adjusting or scaling the gamification model, in alignment with principles of fair and inclusive participation.
Ethical Risk Assessment:
No significant additional ethical risks are anticipated. The following considerations are addressed:
  • Voluntary participation: For real participants, explicit informed consent will be obtained as per the approved protocol.
  • Privacy and anonymity: The experiment may utilize simulated or anonymized data. GREENCROWD does not collect email addresses, only a unique participant identifier. If real data is used, all approved safeguards (anonymization, pseudonymization, and restricted access) will be applied.
  • Right to withdraw: Participants may withdraw at any time by referencing their unique ID, with all associated data deleted.
  • Data scope: No additional sensitive data will be collected, and the original data processing purpose remains unchanged.
  • Use of results: Results are solely for scientific evaluation and system improvement; there are no individual negative consequences.
  • No adverse consequences: The gamification system does not impact access to external resources or services. Any modifications will be based on fairness and equity.
2. Overview of GREENCROWD Data Collection
In alignment with GREENGAGE objectives, GREENCROWD’s procedures are as follows:
a. Consent Form: The consent form mirrors that of GREENGAGE, providing equivalent conditions for withdrawal at any time.
b. Socio-Demographic Survey: Participants complete the same socio-demographic questionnaire as in the main project.
c. Data Collection: Only data generated during active application use is collected (no passive tracking).
d. Post-Experiment Engagement Survey: To measure engagement with the gamified platform, participants complete the following validated questionnaire (GAMEFULQUEST, adapted for the GREENCROWD context):
3. Engagement Questionnaire: GAMEFULQUEST (Accomplishment & Immersion Dimensions)
Instructions:
“Please indicate how much you agree with the following statements regarding your feelings while using GREENCROWD as a tool for data collection.”
Each question is answered using a 7-point Likert scale:
(1) Strongly disagree, (2) Disagree, (3) Somewhat disagree, (4) Neither agree nor disagree, (5) Somewhat agree, (6) Agree, (7) Strongly agree. A minimal scoring sketch illustrating how these responses can be aggregated follows the item lists below.
Accomplishment
- It makes me feel that I need to complete things.
- It pushes me to strive for accomplishments.
- It inspires me to maintain my standards of performance.
- It makes me feel that success comes through accomplishments.
- It makes me strive to take myself to the next level.
- It motivates me to progress and get better.
- It makes me feel like I have clear goals.
- It gives me the feeling that I need to reach goals.
Immersion
- It gives me the feeling that time passes quickly.
- It grabs all of my attention.
- It gives me a sense of being separated from the real world.
- It makes me lose myself in what I am doing.
- It makes my actions seem to come automatically.
- It causes me to stop noticing when I get tired.
- It causes me to forget about my everyday concerns.
- It makes me ignore everything around me.
- It gets me fully emotionally involved.
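As an illustration of how the questionnaire responses can be summarized, the minimal sketch below aggregates the 7-point responses into one score per dimension by averaging the corresponding items. This is an assumed, conventional scoring procedure rather than the project's analysis code, and the response values shown are hypothetical.

```python
# Minimal scoring sketch (an assumption, not the project's analysis script):
# each dimension score is the mean of its 7-point Likert item responses.
from statistics import mean

# Hypothetical responses from one participant (values 1-7, one per item above).
accomplishment_items = [6, 5, 6, 7, 5, 6, 6, 5]   # 8 Accomplishment items
immersion_items = [4, 3, 2, 4, 3, 3, 4, 2, 3]     # 9 Immersion items

scores = {
    "accomplishment": mean(accomplishment_items),  # 5.75 for this example
    "immersion": mean(immersion_items),            # about 3.11 for this example
}
print(scores)
```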
4. Additional Interview Guide (Optional, for Qualitative Analysis)
Format: Individual semi-structured interview, estimated duration 20–30 minutes.
Purpose: To triangulate quantitative data and gain deeper insight into user motivation, experience, and perceptions of gamification.
Data Management and Ethics Statement:
All data will be handled according to GDPR and local regulations. The study will be conducted under the oversight of the relevant ethics committee, and all participants will be informed of their rights and data protection measures.
Contact Information:
For questions or withdrawal, participants may contact the project team at felipe.vergara@deusto.es.

Appendix C. Post-Experiment Interview Protocol

Description
- Estimated Duration: 20–30 minutes
- Format: Individual semi-structured interview
- Type: Semi-structured (flexible follow-up based on participant answers)
Interview Guide
Section 1: Introduction & General Experience
- Can you briefly describe your overall experience participating in the experiment?
- Did you participate every day, only on some days, or sporadically? What influenced your level of participation?
Purpose: To gather general impressions and explore consistency in participation.
Section 2: Motivation & Participation (Self-Determination Theory: autonomy, competence, relatedness)
- What motivated you to take part in this experiment?
- Did you feel you had the freedom to decide when and how to participate? (Autonomy)
- Did you feel that you improved or developed skills throughout the activity? (Competence)
- Did you feel part of a community or connected with other participants? (Relatedness)
Section 3: Perceptions of Gamification
- Do you recall the game elements used in the experiment (e.g., points, rewards, leaderboard)? What were your thoughts about them?
- Did any of these game elements motivate you to participate more or engage more deeply? Which ones, and why?
- At any point, did you feel the game elements distracted you from the core purpose of the activity?
- Would you have participated the same way without the gamified elements? Why or why not?
Section 4: Strategies, Behavior & Data Quality
- When completing tasks, was your main focus on doing them accurately or completing them quickly to gain rewards?
- Did you feel you were competing against others or more against yourself? How did that affect your behavior?
- Were there moments when you repeated tasks to improve your score or bonus? Why?
Section 5: Enjoyment & Gameful Experience
- Would you describe the experience as enjoyable or fun? What made it so (or not)?
- Were there moments that frustrated you or made you lose interest? What were they?
- Did you feel recognized or appreciated for your contributions (e.g., through feedback, scores, or rankings)?
- Would you like to see gamified approaches like this used in other projects or classes? Why or why not?
Section 6: Suggestions & Final Thoughts
- Would you change anything about the reward system or task design?
- What would you improve to make the experience more meaningful or engaging?
- Is there anything else you would like to share about your experience?

Appendix D. Examples of Emails Sent to Participants During the Campaign

Participant without activity on GREENCROWD
[Email template image: Preprints 160084 i001]
Participant in the top 25% of active participants
[Email template image: Preprints 160084 i002]
Participant in the top 10% of active participants
[Email template image: Preprints 160084 i003]
An illustrative recipient-selection sketch follows the templates.
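The three templates above target participants with no activity, those in the top 25% of active participants, and those in the top 10%. As a rough illustration of how such recipients could be selected from the points log, the sketch below segments participants by their rank among active users. The function name, thresholds, and tie handling are assumptions for illustration, not the campaign's actual tooling.

```python
# Illustrative segmentation sketch (an assumption, not the campaign's tooling):
# split participants into "no activity", "top 25%", and "top 10%" groups by points.
def segment_participants(points_by_user: dict[str, int]) -> dict[str, str]:
    active = sorted((p for p in points_by_user.values() if p > 0), reverse=True)
    segments = {}
    for user, points in points_by_user.items():
        if points == 0:
            segments[user] = "no_activity"              # template i001
        else:
            # Fraction of active users ranked strictly above; ties share the better rank.
            rank = active.index(points) / len(active)
            if rank < 0.10:
                segments[user] = "top_10_percent"        # template i003
            elif rank < 0.25:
                segments[user] = "top_25_percent"        # template i002
            else:
                segments[user] = "active"                # no reminder email in this sketch
    return segments


if __name__ == "__main__":
    print(segment_participants({"u1": 0, "u2": 120, "u3": 45, "u4": 300, "u5": 10}))
```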

References

  1. GREENGAGE - Engaging Citizens - Mobilizing Technology - Delivering the Green Deal | GREENGAGE Project | Fact Sheet | HORIZON. Available online: https://cordis.europa.eu/project/id/101086530 (accessed on 24 May 2024).
  2. Fraisl, D.; Hager, G.; Bedessem, B.; Gold, M.; Hsing, P.-Y.; Danielsen, F.; Hitchcock, C.B.; Hulbert, J.M.; Piera, J.; Spiers, H.; et al. Citizen Science in Environmental and Ecological Sciences. Nat. Rev. Methods Primer 2022, 2, 1–20.
  3. Cappa, F.; Franco, S.; Rosso, F. Citizens and Cities: Leveraging Citizen Science and Big Data for Sustainable Urban Development. Bus. Strategy Environ. 2022, 31, 648–667.
  4. de Sherbinin, A.; Bowser, A.; Chuang, T.-R.; Cooper, C.; Danielsen, F.; Edmunds, R.; Elias, P.; Faustman, E.; Hultquist, C.; Mondardini, R.; et al. The Critical Importance of Citizen Science Data. Front. Clim. 2021, 3, 650760.
  5. Bonney, R. Expanding the Impact of Citizen Science. BioScience 2021, 71, 448–451.
  6. Tengö, M.; Austin, B.J.; Danielsen, F.; Fernández-Llamazares, Á. Creating Synergies between Citizen Science and Indigenous and Local Knowledge. BioScience 2021, 71, 503–518.
  7. Pocock, M.J.O.; Hamlin, I.; Christelow, J.; Passmore, H.-A.; Richardson, M. The Benefits of Citizen Science and Nature-Noticing Activities for Well-Being, Nature Connectedness and pro-Nature Conservation Behaviours. People Nat. 2023, 5, 591–606.
  8. Wehn, U.; Gharesifard, M.; Ceccaroni, L.; Joyce, H.; Ajates, R.; Woods, S.; Bilbao, A.; Parkinson, S.; Gold, M.; Wheatland, J. Impact Assessment of Citizen Science: State of the Art and Guiding Principles for a Consolidated Approach. Sustain. Sci. 2021, 16, 1683–1699.
  9. Citizen Science: Public Participation in Environmental Research. In Citizen Science; Cornell University Press, 2012; ISBN 978-0-8014-6395-2.
  10. Groom, Q.; Pernat, N.; Adriaens, T.; de Groot, M.; Jelaska, S.D.; Marčiulynienė, D.; Martinou, A.F.; Skuhrovec, J.; Tricarico, E.; Wit, E.C.; et al. Species Interactions: Next-Level Citizen Science. Ecography 2021, 44, 1781–1789.
  11. Encarnação, J.; Teodósio, M.A.; Morais, P. Citizen Science and Biological Invasions: A Review. Front. Environ. Sci. 2021, 8, 602980.
  12. Kishimoto, K.; Kobori, H. COVID-19 Pandemic Drives Changes in Participation in Citizen Science Project “City Nature Challenge” in Tokyo. Biol. Conserv. 2021, 255, 109001.
  13. Nelms, S.E.; Easman, E.; Anderson, N.; Berg, M.; Coates, S.; Crosby, A.; Eisfeld-Pierantonio, S.; Eyles, L.; Flux, T.; Gilford, E.; et al. The Role of Citizen Science in Addressing Plastic Pollution: Challenges and Opportunities. Environ. Sci. Policy 2022, 128, 14–23.
  14. Nardi, F.; Cudennec, C.; Abrate, T.; Allouch, C.; Annis, A.; Assumpção, T.; Aubert, A.H.; Bérod, D.; Braccini, A.M.; Buytaert, W.; et al. Citizens AND HYdrology (CANDHY): Conceptualizing a Transdisciplinary Framework for Citizen Science Addressing Hydrological Challenges. Hydrol. Sci. J. 2022, 67, 2534–2551.
  15. Pocock, M.J.O.; Adriaens, T.; Bertolino, S.; Eschen, R.; Essl, F.; Hulme, P.E.; Jeschke, J.M.; Roy, H.E.; Teixeira, H.; de Groot, M. Citizen Science Is a Vital Partnership for Invasive Alien Species Management and Research. iScience 2024, 27, 108623.
  16. Swinnen, K.R.R.; Jacobs, A.; Claus, K.; Ruyts, S.; Vercayie, D.; Lambrechts, J.; Herremans, M. ‘Animals under Wheels’: Wildlife Roadkill Data Collection by Citizen Scientists as a Part of Their Nature Recording Activities. Nat. Conserv. 2022, 47, 121–153.
  17. Jones, A.; Earnest, J.; Adam, M.; Clarke, R.; Yates, J.; Pennington, C.R. Careless Responding in Crowdsourced Alcohol Research: A Systematic Review and Meta-Analysis of Practices and Prevalence. Exp. Clin. Psychopharmacol. 2022, 30, 381–399.
  18. Hulbert, J.M.; Hallett, R.A.; Roy, H.E.; Cleary, M. Citizen Science Can Enhance Strategies to Detect and Manage Invasive Forest Pests and Pathogens. Front. Ecol. Evol. 2023, 11, 1113978.
  19. Lotfian, M.; Ingensand, J.; Brovelli, M.A. The Partnership of Citizen Science and Machine Learning: Benefits, Risks, and Future Challenges for Engagement, Data Collection, and Data Quality. Sustainability 2021, 13, 8087.
  20. Danielsen, F.; Eicken, H.; Funder, M.; Johnson, N.; Lee, O.; Theilade, I.; Argyriou, D.; Burgess, N.D. Community Monitoring of Natural Resource Systems and the Environment. Annu. Rev. Environ. Resour. 2022, 47, 637–670.
  21. Curtis, V. Motivation to Participate in an Online Citizen Science Game: A Study of Foldit. Sci. Commun. 2015, 37, 723–746.
  22. Tinati, R.; Luczak-Roesch, M.; Simperl, E.; Hall, W. Because Science Is Awesome: Studying Participation in a Citizen Science Game. In Proceedings of the 8th ACM Conference on Web Science; Association for Computing Machinery: New York, NY, USA, 2016; pp. 45–54.
  23. Iacovides, I.; Jennett, C.; Cornish-Trestrail, C.; Cox, A.L. Do Games Attract or Sustain Engagement in Citizen Science? A Study of Volunteer Motivations. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2013; pp. 1101–1106.
  24. Bowser, A.; Hansen, D.; Preece, J.; He, Y.; Boston, C.; Hammock, J. Gamifying Citizen Science: A Study of Two User Groups. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing; Association for Computing Machinery: New York, NY, USA, 2014; pp. 137–140.
  25. Simperl, E.; Reeves, N.; Phethean, C.; Lynes, T.; Tinati, R. Is Virtual Citizen Science A Game? Trans. Soc. Comput. 2018, 1, 6–1.
  26. Miller, J.A.; Gandhi, K.; Gander, A.; Cooper, S. A Survey of Citizen Science Gaming Experiences. Citiz. Sci. Theory Pract. 2022, 7, 1–12.
  27. Delfine, M.; Muller, A.; Manners, R. Literature Review on Motivation and Incentives for Voluntary Participation in Citizen Science Projects. 2024.
  28. Martella, R.; Clementini, E.; Kray, C. Crowdsourcing Geographic Information with a Gamification Approach. Geod. Vestn. 2019, 63, 213–233.
  29. Zolotov, M.J.N. Collecting Data for Indoor Mapping of the University of Münster Via a Location Based Game. M.S. Thesis, Universidade NOVA de Lisboa, Portugal, 2014.
  30. Vasiliades, M.A.; Hadjichambis, A.C.; Paraskeva-Hadjichambi, D.; Adamou, A.; Georgiou, Y. A Systematic Literature Review on the Participation Aspects of Environmental and Nature-Based Citizen Science Initiatives. Sustainability 2021, 13, 7457.
  31. Bastos, D.; Fernández-Caballero, A.; Pereira, A.; Rocha, N.P. Smart City Applications to Promote Citizen Participation in City Management and Governance: A Systematic Review. Informatics 2022, 9, 89.
  32. GREENGAGE | GREENGAGE Project.
  33. Puerta-Beldarrain, M.; Gómez-Carmona, O.; Chen, L.; López-de-Ipiña, D.; Casado-Mansilla, D.; Vergara-Borge, F. A Spatial Crowdsourcing Engine for Harmonizing Volunteers’ Needs and Tasks’ Completion Goals. Sensors 2024, 24, 8117.
  34. Riley, J.; Mason-Wilkes, W. Dark Citizen Science. Public Underst. Sci. 2024, 33, 142–157.
  35. Koffler, S.; Barbiéri, C.; Ghilardi-Lopes, N.P.; Leocadio, J.N.; Albertini, B.; Francoy, T.M.; Saraiva, A.M. A Buzz for Sustainability and Conservation: The Growing Potential of Citizen Science Studies on Bees. Sustainability 2021, 13, 959.
  36. Shinbrot, X.A.; Jones, K.W.; Newman, G.; Ramos-Escobedo, M. Why Citizen Scientists Volunteer: The Influence of Motivations, Barriers, and Perceived Project Relevancy on Volunteer Participation and Retention from a Novel Experiment. J. Environ. Plan. Manag. 2023, 66, 122–142.
  37. O’Grady, M.; O’Hare, G.; Ties, S.; Williams, J. The Citizen Observatory: Enabling Next Generation Citizen Science. Bus. Syst. Res. J. 2022, 12, 221–235.
  38. Palumbo, R.; Manesh, M.F.; Sorrentino, M. Mapping the State of the Art to Envision the Future of Large-Scale Citizen Science Projects: An Interpretive Review. Int. J. Innov. Technol. Manag. 2022.
  39. Stein, C.; Fegert, J.D.; Wittmer, A.; Weinhardt, C. Digital Participation for Data Literate Citizens – A Qualitative Analysis of the Design of Multi-Project Citizen Science Platforms. IADIS Int. J. Comput. Sci. Inf. Syst. 2023, 18, 1.
  40. Wehn, U.; Bilbao Erezkano, A.; Somerwill, L.; Linders, T.; Maso, J.; Parkinson, S.; Semasingha, C.; Woods, S. Past and Present Marine Citizen Science around the Globe: A Cumulative Inventory of Initiatives and Data Produced. Ambio 2025.
  41. Robinson, D.K.R.; Simone, A.; Mazzonetto, M. RRI Legacies: Co-Creation for Responsible, Equitable and Fair Innovation in Horizon Europe. J. Responsible Innov. 2021, 8, 209–216.
  42. Phillips, T.B.; Ballard, H.L.; Lewenstein, B.V.; Bonney, R. Engagement in Science through Citizen Science: Moving beyond Data Collection. Sci. Educ. 2019, 103, 665–690.
  43. Koivisto, J.; Hamari, J. The Rise of Motivational Information Systems: A Review of Gamification Research. Int. J. Inf. Manag. 2019, 45, 191–210.
  44. Hedges, L.V. Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators. J. Educ. Stat. 1981, 6, 107–128.
  45. Sailer, M.; Homner, L. The Gamification of Learning: A Meta-Analysis. Educ. Psychol. Rev. 2020, 32, 77–112.
  46. Aristeidou, M.; Herodotou, C.; Ballard, H.L.; Higgins, L.; Johnson, R.F.; Miller, A.E.; Young, A.N.; Robinson, L.D. How Do Young Community and Citizen Science Volunteers Support Scientific Research on Biodiversity? The Case of iNaturalist. Diversity 2021, 13, 318.
  47. Najwer, A.; Jankowski, P.; Niesterowicz, J.; Zwoliński, Z. Geodiversity Assessment with Global and Local Spatial Multicriteria Analysis. Int. J. Appl. Earth Obs. Geoinformation 2022, 107, 102665.
  48. Cho, S.; Hollstein, L.; Aguilar, L.; Dwyer, J.; Auffrey, C. Youth Engagement in Water Quality Monitoring: Uncovering Ecosystem Benefits and Challenges. Architecture 2024, 4, 1008–1019.
  49. GitHub - Fvergaracl/Greencrowd. Available online: https://web.archive.org/web/20250309214115/https://github.com/fvergaracl/greencrowd (accessed on 9 March 2025).
  50. Vergara Borge, F. Fvergaracl/GAME 2024.
  51. Högberg, J.; Hamari, J.; Wästlund, E. Gameful Experience Questionnaire (GAMEFULQUEST): An Instrument for Measuring the Perceived Gamefulness of System Use. User Model. User-Adapt. Interact. 2019, 29, 619–660.
  52. Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268.
  53. Johnson, D.; Deterding, S.; Kuhn, K.-A.; Staneva, A.; Stoyanov, S.; Hides, L. Gamification for Health and Wellbeing: A Systematic Review of the Literature. Internet Interv. 2016, 6, 89–106.
Figure 1. The experiment’s protocol diagram approved by the ethics committee.
Figure 2. GREENCROWD map interface showing the active POIs and their associated point rewards (a), and the view when a user selects a POI and creates a route to reach it (b).
Figure 3. Tasks comprised three stages: environmental perception ratings (a), geotagged photo submissions (b), and engagement intent indicators with suggestions (c).
Figure 4. Submitting a response to a task and getting points for it.
Figure 5. Leaderboard showing the points of all participants, including the user’s own, displayed anonymously so that individual identities are not revealed (a). Below, activity charts visualize the user’s performance in comparison with the collective activity of other participants (b).
Figure 6. The bar plot of mean scores and standard deviations for all GAMEFULQUEST items shows higher accomplishment than immersion across the board.
Figure 7. Radar plot of individual participant psychometric profiles, highlighting diversity in responses.
Figure 8. Boxplot of mean accomplishment scores by grouped task completion, with superimposed jittered data points for clarity.
Figure 9. Boxplot of mean immersion scores by grouped task completion, similarly annotated.
Table 1. Distribution of participants across predefined age brackets, along with corresponding life-stage categories.
Age Range (Years) | Description | n | %
18–24 | Young adults, university-age | 38 | 77.55%
25–34 | Early adulthood | 1 | 2.04%
35–44 | Mid-adulthood | 0 | 0%
45–54 | Late adulthood | 1 | 2.04%
55–64 | Pre-retirement or early senior years | 0 | 0%
65–74 | Early retirement | 0 | 0%
75–84 | Senior adults | 0 | 0%
Over 85 | Elderly/advanced age | 0 | 0%
Not declared | No age range provided | 9 | 18.37%
Total | | 49 | 100%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.