Preprint
Review

This version is not peer-reviewed.

Shaping Architecture with Generative Artificial Intelligence: Deep Learning Models in Architectural Design Workflow

A peer-reviewed version of this preprint was published in:
Architecture 2025, 5(4), 94. https://doi.org/10.3390/architecture5040094

Submitted: 28 August 2025
Posted: 01 September 2025


Abstract
Deep-learning generative AI promises to transform architectural design, yet its readiness for everyday design workflows remains unclear. This study systematically reviews peer-reviewed work from 2015–2025 to assess how GenAI methods align with architectural practice. Following database searches and subject-area filtering, 42 studies were included from 1,566 records. Each was evaluated with a five-indicator, three-tier rubric: Output Representation Type (ORT), Pipeline Integration (PI), Workflow Standardization (WS), Tool Readiness (TR), and Technical Skillset (TS). Results show outputs are concentrated in non-native formats (≈40% raster imagery; ≈45% meshes/voxels/graphs), with relatively few CAD/BIM-native results (≈15%). Toolchains are often fragmented (PI: ≈43% Tier-0 with ≥4 steps; ≈40% Tier-1 with 2–3 tools; ≈17% Tier-2 single-platform). Most studies map onto the schematic design stage only (WS Tier-1 ≈69%), and multi-stage, CAD/BIM-compatible pipelines remain uncommon (WS Tier-2 ≈12%). Prototypes frequently require bespoke coding (TR Tier-0 ≈65%) and advanced expertise (TS Tier-0 ≈74%). These findings indicate a persistent gap between experimentation with ideation-oriented GenAI and the pragmatism of CAD/BIM-centered delivery. Advancing practice readiness will require native CAD/BIM outputs, tighter plug-in/API integration, tools that bridge heterogeneous file formats and export metadata, and packaging of ML modules into CAD/BIM environments in ways that lower skill demands. Limitations include the academic focus of the corpus and the field's rapid evolution.
Keywords:

1. Introduction

1.1. Generative AI Models in Architectural Design

Generative design methodologies have a long history in architectural practice. Since the 1960s, architects have employed rule-based generative systems such as pattern languages [1], shape grammars [2], expert systems [3], and optimization techniques [4] to automate design processes and generate architectural form. Although these methods were initially met with skepticism [5,6], the generative design paradigm re-emerged in the 1990s, when architects used techniques and models such as evolutionary algorithms [7,8,9,10], cellular automata [11,12] and multi-agent systems [13,14] to generate and optimize architectural form, shape and floor plans in relation to functional, structural or environmental parameters. However, despite these promising endeavors, an early study by Grobman et al. [6] showed that, up until 2009, generative design methods were rare in mainstream architectural practice. Recently, academic interest in generative design has resurfaced. Castro Pena et al. (2021) report an 85% increase in publications related to generative methods since 2015 [15], reflecting a broader shift toward data-driven and approximation-based approaches enabled by recent advances in Deep Learning (DL) and Generative Artificial Intelligence (GenAI). Recent systematic reviews of GenAI in architectural design indicate a significant surge, particularly since 2020, in studies on the application of data-driven Deep Learning models. These models have emerged as central to a rapidly evolving research agenda for architectural design, primarily oriented toward the development of novel computational methods for early-stage concept imagery, three-dimensional massing, floor-plan generation, and urban-scale design.

1.2. Deep Learning-Based Generative AI

The rise of Deep Learning has brought renewed attention to GenAI systems, which diverge from earlier rule-based models by learning patterns from large datasets [16]. Deep Learning (DL) is a subsymbolic, associative machine learning method that encompasses artificial neural networks trained on large datasets, most commonly consisting of images and texts. Deep Learning generative systems employ Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Diffusion models, and Transformers. GANs comprise two neural networks: a generator, which creates fake data, and a discriminator, which learns to distinguish fake from real data, thus pushing the generator to produce increasingly convincing outputs. Conditional GANs are variants of GANs in which both the generator and discriminator are conditioned on additional information (such as class labels or other attributes), enabling the model to generate data that meet specific criteria or contexts. VAEs are generative models that consist of an encoder–decoder pair. VAEs generate new data by sampling from a learned latent space of input-data representations, enforcing a probabilistic structure that allows smooth interpolation within the latent space. Diffusion models are a class of generative models that gradually add noise to training data and learn to reverse this process to generate new data; they produce high-quality synthetic samples by iteratively denoising a sample drawn from random noise. Transformers are neural network architectures based on self-attention mechanisms that process sequential data in parallel and thus achieve state-of-the-art performance in tasks such as natural language processing (NLP) [17].
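The forward noising process that diffusion models learn to reverse admits a compact closed form. The following Python sketch is illustrative only (it is not taken from any of the reviewed studies) and uses the linear variance schedule common in DDPM-style models, where x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps:

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Noise variances beta_t for t = 1..T (linear DDPM-style schedule)."""
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form, returning (x_t, eps)."""
    alpha_bar_t = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps, eps

rng = np.random.default_rng(0)
T = 1000
betas = linear_beta_schedule(T)
x0 = rng.standard_normal((8, 8))                    # stand-in for a training image

x_early, _ = forward_diffuse(x0, 10, betas, rng)    # barely noised
x_late, _ = forward_diffuse(x0, T - 1, betas, rng)  # close to pure noise

# alpha_bar shrinks toward 0, so late steps retain almost no signal
alpha_bar = np.cumprod(1.0 - betas)
assert alpha_bar[10] > 0.99 and alpha_bar[-1] < 1e-2
```

A trained denoising network would then reverse this trajectory step by step; the reverse model is omitted here, as only the forward process has this simple closed form.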
While the use of DL models as generative design techniques signals a growing enthusiasm for AI-driven design experimentation, their practical utility for architectural design remains an open question. This study addresses this issue by systematically reviewing recent literature and documented case studies to evaluate the relevance of data-driven generative techniques for architectural design. Specifically, it examines the degree to which DL-based GenAI methods can be meaningfully integrated into professional design workflows for everyday architectural practice. Besides DL generative models like those mentioned above, the GenAI methods considered in this study also include DL models combined with other generative systems (such as agent-based models, genetic algorithms or shape grammars) that cooperatively produce the design outcome.

1.3. Previous Reviews

Reviews of GenAI methods in architectural design have increased significantly since 2020. These reviews discuss the problems architects face with GenAI models in the design process and propose strategies to overcome them. Some reviews see data-driven GenAI models as part of a wider field of generative design research that encompasses additional generative techniques, such as search-based and rule-based systems, most often genetic algorithms, shape grammars and cellular automata. Yet some recent studies focus exclusively on the use and impact of data-driven, DL-based GenAI models on architectural practice.
Castro Pena et al. (2021) [15] reviewed 75 scientific publications from 1995 onward to rank the frequency of occurrence of various types of generative design methods in the literature. The authors observe that DL-based AI generators for conceptual-stage shape or floor-plan optimization are a rising trend, although outweighed (in terms of study count) by interactive evolutionary computation and cellular automata. The authors implicitly suggest that DL methods are mostly research prototypes, with limited integration into conventional CAD/BIM architectural design workflows.
Bölek et al. (2023) [18] sorted generative methods into six classes in order to examine their application domains in architectural design. They argue that GenAI holds transformative potential for architectural practice, enabling new modes of design thinking, integrated performance analysis, and efficient solution exploration. They observe that in the 242 reviewed papers, data-driven DL models are not as prevalent as other generative methods like evolutionary computation, yet they form an emerging and increasingly popular body of work in academic research. Citing numerous GenAI models, such as GANs for floor-plan generation or façade design, they point out that workflow integration generally remains at the prototype level. Realizing the potential of GenAI requires advances in accessibility, technical innovation, and interdisciplinary collaboration.
Vissers-Similon et al. (2024) [16] examined seven generative techniques for early-stage architectural design. Like Castro Pena et al. and Bölek et al., they found that evolutionary computing was the most used method for form generation, with DL generators increasing sharply since 2020. They suggest that Transformer models and Graph Machine Learning, along with evolutionary computation, have the greatest potential to impact the early stages of architectural design. Although Transformer models are easy to slot into current ideation workflows for fast sketch and image generation (today limited to 2D output), there is a growing research trend toward employing GANs, VAEs and Diffusion models for training on and generating 3D datasets and configurations, as well as toward direct integration into CAD/BIM environments [19,20,21,22,23,24].
Zhuang et al. (2025) [25] highlighted that generative design techniques, traditionally based on rule-driven algorithms, have lately shifted towards machine learning models. These include both classic systems that encompass algorithms like Decision Trees and Random Forests, and data-driven DL models like GANs and VAEs. Outcomes of this latter paradigm may range from raster images, topology graphs and vector shapes to mesh models and, rarely, NURBS surfaces and solids, depending on the toolchain of the design workflow. This approach, as the authors argue, can eventually transform the practice of architecture by enabling efficient conceptual design exploration, complex performance evaluation, and simulation-informed creativity. Tighter workflow integration can be achieved by tactics such as pairing smaller conditional GAN/VAE models with scripted constraints. However, the authors implicitly suggest that realizing the full potential of DL-based GenAI depends on resolving hurdles such as heterogeneous AEC data formats, insufficient data availability, and lack of controllability and interpretability.
Li et al. (2025) [26] observed that architects are rather hesitant to use AI models in architectural design, with only occasional deployment, owing to algorithmic complexity and the demand for specialized expertise. Despite architects’ awareness of generative AI’s potential, integration into everyday design practice remains narrow and uneven; when employed, it is mostly for image generation. The authors argue that research efforts should focus on mapping different generative tools across the entire workflow, as well as on building unified, modular platforms where tools can be chained or swapped without rebuilding the pipeline. In addition, they propose embedding domain-specific requirements and regulation-aware, real-time, user-friendly features so that advanced GenAI can become an everyday, end-to-end design partner.
Lystbæk (2025) [17], in his review of ML and AI models in architectural design, suggests that, despite adoption barriers due to technical limitations, GenAI models can substantively enhance creativity and productivity across design stages while respecting the discipline’s standards and constraints. He observes that many architectural firms are experimenting with general-purpose AI platforms (e.g., Midjourney or ChatGPT) rather than industry-specific or custom-trained systems, which would allow more reliable and domain-specific deployment in architectural design workflows. Although professional architects recognize the transformative potential of generative models (once their shortcomings are managed), concerns over AI “hallucinations”, unresolved issues of intellectual property and even the environmental footprint of large models have tempered their enthusiasm for broad deployment. The author suggests that AI models in architectural practice could be used for managing secondary and mundane tasks, such as automated documentation, data analysis, or early-stage massing studies, that hardly impact critical design decisions. Large pre-trained models could then be fine-tuned on domain-specific data, including validation metrics and output controls, to improve contextual awareness and ease incorporation into existing design workflows, such as coupling a custom floor-plan generator with BIM software.
These reviews show that DL-based GenAI methods are mostly used in the early stages of architectural design, for brainstorming ideas and exploring formal configurations through images, 3D massing models and floor-plan layouts. Yet integration of GenAI methods into later phases of design development is still limited, owing to technical and practical barriers: lack of specialized expertise and computational resources, scarcity of high-quality architectural datasets, concerns about outcome reliability, and low interpretability of generated outputs. When DL-based GenAI models are used beyond the conceptual design stage, they remain stand-alone prototypes, with limited CAD/BIM workflow integration that is largely contingent upon bespoke scripting, custom bridges between heterogeneous platforms and file formats, and ad-hoc pipelines. The above reviews suggest a chasm between research prototyping and design workflow integration.

2. Research Methodology

2.1. Review and Assessment Method

The study employed a five-indicator system to assess the capacity of DL-based GenAI methods to efficiently map onto architectural design workflows. These indicators assess (1) the relevance of the type of generated output for architectural design, (2) the extent of seamless integration of design tools along the pipeline, (3) the extent to which the method used maps onto standard design workflow from schematic to construction documents stages, (4) the requirement for custom-made tools and scripting skills, and (5) the need for technical support/expertise. To this end the study reviewed 42 case studies selected from scientific journals, edited volumes and conference proceedings.
A search for literature on DL-based GenAI in architectural design was carried out in Scopus as well as several journal databases within the 2015–2025 timeframe. This review window was chosen because of the radical increase in publications on generative design since 2015, as well as the mainstream employment of DL models in architectural generative design practice, especially after 2020. The search was limited to academic journals, edited volumes and well-known conference proceedings because: (1) they represent the state of the art in deep learning generative AI in architectural design and routinely report on prototypes with potential professional uptake; (2) they allow for quality control, since leading journals and conferences use rigorous peer review, so methods are well documented; (3) they involve a comparable scope, because they target early-stage generative workflows rather than purely construction-engineering papers; and (4) they allow for pragmatic assessment, because reported coding effort can be judged realistically while still capturing the state of the art. Industry case studies, which might be documented elsewhere (white papers, proprietary reports), may capture more highly scored methods and projects, but were outside our present scope.
The keywords used were “deep learning”, “architectural design” and “artificial intelligence”. The initial search in Scopus returned 1,566 documents. This set was further reduced to 860 by excluding irrelevant subject areas from the search; it comprised 463 journal articles, 360 conference papers and 37 book chapters. Only 42 of these were judged relevant to our research objective. Relevance was assessed according to the following criteria:
(1) they reported on some form of DL-based GenAI method for architectural design regardless of the degree of implementation.
(2) the methods focused predominantly on the generation of architectural form. Although form is not easy to define in the context of architecture, in this study it involves conceptual representations of three-dimensional spatial configurations: shapes, volumes, spatial layouts and the massing of buildings or structures. Techniques that generated building façades or floor plans were also selected as long as they implied or led to some three-dimensional configuration.
(3) papers that used optimization techniques (such as structural or energy efficiency and thermal comfort) were also included, as long as they imposed or significantly affected the generation of form.
The methodology combined qualitative indicators with quantifiable proxies to assess 42 case studies drawn from peer-reviewed journal articles, edited volumes and conference proceedings. To assess workflow integration, each case was evaluated according to five (5) workflow-integration indicators on a 3-Tier scale. These indicators assess the output format representation; the level of integration of the generative workflow pipeline; the alignment of techniques used with standard architectural design practice workflows; the need for scripting and custom-made tools; and the requirement for specialized knowledge and technical support from non-architects. The indicators are described more analytically below.
  • Output Representation Type (ORT). This indicator determines the final output format of the workflow. Workflow output may vary, from raster imagery (Tier-0), to voxels, topology graphs or mesh models (Tier-1) and vector-based CAD/BIM-native geometries (Tier-2). Output is considered to be the final outcome of the process, even when pipelines are customized for indirect conversion of images or meshes into CAD/BIM-ready formats.
  • Pipeline Integration (PI). The PI indicator assesses the extent to which the tools used along the pipeline of the workflow are integrated. When the workflow combines four or more loosely coupled tools, usually with manual hand-offs, studies score Tier-0. When two to three tools are linked via scripts or plug-ins, studies score Tier-1. When a single platform or fully embedded plug-in is used, with no exports or imports, studies score Tier-2.
  • Workflow Standardization (WS). Standard design workflows usually follow the Schematic Design / Design Development / Construction Documents (SD/DD/CD) pipeline, which indicates the typical phases of a design and construction project, commonly used in architecture and engineering. This is a structured approach that takes a project from initial concept to detailed construction plans, ensuring a systematic and organized process. In the context of our research, for papers to align with this standard scheme, they must output CAD/BIM-native models coupled with site and code constraints, structural and environmental performance, comfort-metrics documentation, etc. These are Tier-2 case studies, which almost always output parametric geometry, editable NURBS models or IFC. A mesh-based, raster-to-vector or raster-to-3D-massing pipeline scores Tier-1. Studies in which only stylistic, conceptual and mood-board generation are present stay at Tier-0.
  • Tool Readiness (TR). This indicator determines the requirement for heavy or light custom or off-the-shelf tools in the design workflow. Sometimes there is a demand for custom-made tools (such as Python scripts) and heavy bespoke programming, essential for dataset training and further design development. These are Tier-0 studies. Tier-1 studies introduce occasional short scripts, visual code or macros, while Tier-2 studies do not require heavy programming or coding, often because they use off-the-shelf UIs.
  • Technical Skillset (TS). This indicator assesses the requirement for technical expertise beyond typical architectural skillsets, including technical support from programmers and computer scientists. Standard architectural skillsets do not go beyond mainstream digital drafting or visual-coding software and parametric modelling plug-ins. Tier-0 studies are those that require the skills of competent data scientists and engineers, such as deep ML/RL expertise, Python/C#/API scripting and GPU management. Moderate use of algorithmic node-graph editors (like Grasshopper and Dynamo) and off-the-shelf API bridges, plus light scripting and plug-in configuration skills, indicates Tier-1 studies. Studies that operate with familiar CAD/BIM or prompt-based web UIs, requiring no scripting or model training, are Tier-2.
Table 1 presents the 3-Tier scale for each indicator.
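For illustration, the rubric can also be encoded programmatically. The Python sketch below uses the paper's five indicator names, while the tier descriptions are paraphrased from the definitions above and the example study at the end is hypothetical:

```python
# Illustrative encoding of the five-indicator, three-tier rubric.
# Indicator names follow the paper; tier descriptions are paraphrased;
# the example study at the bottom is hypothetical.
RUBRIC = {
    "ORT": {0: "raster imagery", 1: "voxels/meshes/graphs", 2: "CAD/BIM-native vectors"},
    "PI":  {0: ">=4 loosely coupled tools", 1: "2-3 scripted tools", 2: "single platform/plug-in"},
    "WS":  {0: "experimental only", 1: "single stage (schematic)", 2: "multi-stage SD/DD/CD"},
    "TR":  {0: "heavy bespoke coding", 1: "short scripts/macros", 2: "off-the-shelf UI"},
    "TS":  {0: "ML-specialist skills", 1: "parametric/visual coding", 2: "familiar CAD/BIM or prompts"},
}

def describe(scores):
    """Map a study's {indicator: tier} dict to the rubric's tier descriptions."""
    assert set(scores) == set(RUBRIC), "score every indicator exactly once"
    return {ind: f"Tier-{t}: {RUBRIC[ind][t]}" for ind, t in scores.items()}

# Hypothetical study: a web diffusion service producing raster concept images
example = describe({"ORT": 0, "PI": 2, "WS": 1, "TR": 2, "TS": 2})
```

Such an encoding also makes the rubric's internal tensions visible: the hypothetical study above scores high on PI, TR and TS precisely because it stops at raster output (ORT Tier-0), a pattern discussed in the Results section.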

2.2. Results

A total of 42 peer-reviewed studies on DL-based GenAI methods in architectural design were evaluated in terms of five indicators: ORT, PI, WS, TR and TS. Three discrete Tiers were used for each indicator. The titles, publication details and reference-list numbers of the 42 selected studies are listed chronologically in Table 2, along with their scores on the 3-Tier scale for each indicator.
  • Output Representation Type (ORT)
Table 3 shows the 3-Tier score and rationale of each study for the ORT indicator.
Table 4 shows the number and percentage distribution of studies for each Tier score on the ORT indicator. Seventeen (17) papers scored Tier-0 (40%). Their respective generative methods produced raster outputs (PNG/JPG) through text-to-image diffusion models and GANs (Stable Diffusion, Midjourney, CycleGAN, StyleGAN, VQGAN, etc.) or conceptual diagram generators. Typical examples include Lee (2025) and Danchenko (2021), where text-to-image diffusion or GAN models are leveraged for rapid visual ideation, mood-board creation or conceptual-stage exploration. Although these approaches excel at speed and accessibility, they offer no direct geometric hand-off, forcing designers to redraw or remodel outputs before schematic design can commence.
Tier-1 studies account for nineteen (19) papers (45%), almost as many as the raster-output methods. Studies such as Sebestyén et al. (2023) and Eisenstadt et al. (2024) export density voxels, triangular meshes or layout topology graphs, either via GAN training on three-dimensional datasets (3DGANs) or by converting 2D image heat maps and signed distance fields via modelers like Rhino, SketchUp or Blender. These processes can be used for diagrammatic form conceptualization, syntactic exploration, early-stage massing, or environmental performance analyses without complete reconstruction. Yet, while they advance beyond pure imagery, they lack editable parametric geometry, layers, or metadata for CAD/BIM-ready modelling, and demand further manual or scripted conversion for downstream design development.
Tier-2 studies account for only six (6) papers (15%), exemplified by Abdelmoula et al. (2024) and Okonta et al. (2025). Their methods manage to export AI-derived data into platforms like Rhino/Grasshopper and Revit to ultimately produce CAD/BIM-native geometries (such as editable NURBS surfaces, B-Reps, solids, etc.), preserving semantic object attributes and material parameters. However, achieving this level of integration requires custom APIs and Python plug-ins, CAD-to-BIM bridges like Rhino.Inside, or advanced middleware like Autodesk Forge and Dynamo. Nonetheless, Tier-2 outputs demonstrably support multi-phase workflows, allowing concept, analysis and documentation to proceed without data loss.
  • Pipeline Integration (PI)
Table 5 shows the 3-Tier score and rationale of each study for the PI indicator.
Table 6 shows the number and percentage distribution of studies for each Tier score on the PI indicator. Tier-0 and Tier-1 studies dominate the corpus. For 43% of the studies (Tier-0), the workflow is fragmented into four or more distinct tools, usually with manual hand-offs, such as separate AI training code (e.g., CycleGAN) and image-to-mesh post-processing steps. A typical sequence starts with a Python script and a custom ML model; images then move to Photoshop for masking, a module is used for image vectorization, vector files move to Illustrator for refinement, and are finally imported into Revit for redrawing. Fragmentation is especially pronounced where authors have to manually import raster output into conventional drafting or post-processing tools, dealing with potential file-format loss, version mismatch, and human error.
Tier-1, which accounts for seventeen (17) papers (40%), indicates moderate coupling between tools in the workflow, such as two or three components scripted end-to-end with automatic hand-offs but still requiring distinct applications. Examples include a diffusion height-map feeding a Grasshopper mesh builder, or VAE + ANN optimizers inside Rhino/Grasshopper driving EnergyPlus batch runs. Often, they employ API calls, the Rhino.Inside bridge to Revit, or Grasshopper components to bind two or three tools into a quasi-continuous pipeline. For example, Abdelmoula et al.’s SketchPLAN pipeline links raster image output (via a Python-trained cGAN and pix2pix segmentation) with Rhino and Revit to produce editable BIM elements. Such custom scripting strategies and CAD-to-BIM bridging modules function as pipeline “glue” that preserves the full content of model metadata while letting researchers exploit specialized engines absent from host CAD platforms. Yet Tier-1 studies are still far from collapsing the entire workflow into a user-friendly interface with an uninterrupted toolchain, let alone a single platform.
Only 17% of the corpus, seven (7) studies, attain Tier-2. In this case, studies employ a single design environment or plug-in, meaning that the designer works entirely inside a single UI or through a bespoke pipeline. A single interface may deliver only 2D raster images, as in Zhang et al.’s ComfyUI node-graph GUI, which means such studies score Tier-0 on the ORT indicator. A bespoke development, on the other hand, like Okonta et al.’s NLP-to-Revit system, demonstrates that deep integration is technically feasible but usually demands custom Python scripts and add-ins, API wrappers, or extended visual-code components. Nevertheless, Tier-2 workflows deliver the greatest downstream value: reduced translation effort, consistent parameter sets, and immediate compatibility with practice standards.
  • Workflow Standardization (WS)
Table 7 shows the 3-Tier score and rationale of each study for the WS indicator.
Table 8 shows the number and percentage distribution of studies for each Tier score on the WS indicator. Eight (8) papers (19%) scored Tier-0 because they follow an experimental track where the generated output is interesting research material but cannot be mapped onto any typical architectural design workflow without wholesale re-work. Tier-0 studies showcase novel AI engines (GAN collages, diffusion images, RL sandboxes) yet explicitly state that outputs are “for inspiration only” or require manual redrawing before practical use.
Twenty-nine (29) papers (69%) scored Tier-1 because their respective workflows support single-stage integration (typically concept sketching, façade mood-boarding, or massing studies), but hand-off to the schematic design or BIM phase is manual. Tier-1 studies embed GenAI into one canonical task: early massing, such as Sebestyén et al.’s density voxels; space planning, such as Eisenstadt et al.’s graph-based floor-plan topologies and Kakooee & Dillenburger’s RL layout; or interior mood-boarding, such as Lee et al.’s prompt framework.
Only five (5) papers (12%) reach multi-stage integration in the workflow. These Tier-2 studies include Abdelmoula et al.’s SketchPLAN and Okonta et al.’s NLP-to-Revit bridge, both of which deposit native BIM objects that remain editable through documentation, and Veloso’s agent academy, which, when paired with a Grasshopper pipeline, can carry bubble diagrams into performance analysis without loss of semantics. Their outputs feed directly into schematic design and carry usable data forward to design development as editable NURBS or parametric Grasshopper definitions, so that design continues in standard BIM-friendly DD/CD pipelines.
  • Tool Readiness (TR)
Table 9 shows the 3-Tier score and rationale of each study for the TR indicator.
Table 10 shows the number and percentage distribution of studies for each Tier score on the TR indicator. The highest proportion of studies (n = 27, 65%) scored Tier-0. These papers introduce custom ML training algorithms and datasets along with reward functions: diffusion volumes (Sebestyén et al., 2023), RL spatial agents (Veloso & Krishnamurti 2023, Wang & Snooks 2021), graph-GNN completion (Eisenstadt et al. 2024). Even when the generative engine itself is open-source (e.g., PyTorch StyleGAN, Stable Diffusion), authors routinely append preprocessing, post-processing and evaluation code that lies outside commercial CAD/BIM ecosystems. The result is a patchwork of notebooks, scripts and API calls: powerful for experimentation but unusual in day-to-day practice.
Tier-1 papers, which account for about one quarter of the corpus (n = 11, 26%), show a transitional pattern. Researchers use ready-made plug-ins (Karamba for finite-element analysis, LunchBox-ML for regression, ComfyUI for diffusion images), adding only data-flow-guiding modules such as Grasshopper canvases or CSV import macros. Although flexible, these methods still assume some fluency in scripting and parametric modelling.
Only four (4) studies (10%) achieved Tier-2. These methods (Celik 2024, Panaanen et al. 2023, Chen et al. 2023) usually exploit off-the-shelf, web-based text-to-image services (Midjourney, DALL-E 2) or open-source platforms (Stable Diffusion) with no need for coding, just prompt literacy. Although no-code generative systems may gradually become possible, current off-the-shelf generative techniques that do not demand bespoke scripting are lightweight ideation tools that stop at raster imagery. Solutions that generate geometry with no scripting do not yet seem to exist.
  • Technical Skillset (TS)
Table 11 shows the 3-Tier score and rationale of each study for the TS indicator.
Table 12 shows the number and percentage distribution of studies for each Tier score on the TS indicator. The dominance of Tier-0 studies (n = 31, 74%) reflects a field still driven by research prototypes. RL graph-based space planning, diffusion/3DGAN pipelines and optimizers coupled with performance modules commonly require Python notebooks, custom data curation, GPU setup, and API bridges that typically exceed the skillsets of architects. In practice, these methods imply interdisciplinary teams and outsourcing to ML specialists.
Tier-1 papers account for 17% of the corpus (n = 7). Workflow complexity is mitigated by parametric design environments and custom modules like Grasshopper with Karamba and ML components, or node-graph UIs such as ComfyUI. End-users still need to understand data flow, parameters, and plug-in interactions, but do not train models or write substantial code. This level is increasingly attainable for computationally literate practices.
Tier-2 remains rare and bifurcates into two types of studies. One is prompt-only ideation (text-to-image), which is accessible but delivers raster outputs with limited downstream utility. The other is compiled or tightly embedded add-ins (e.g., Revit-native tools or Rhino.Inside bridges packaged for end-users). These achieve genuine “no-code” operation inside common design software, yet they demand significant engineering investment up front, hence their scarcity in the corpus.
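As a cross-check on the distributions reported above, the tier counts can be tallied in a few lines of Python. The counts below are taken from the text, except the PI Tier-0 count of 18, which is inferred here from the reported 43% of 42 studies:

```python
# Tier counts (Tier-0, Tier-1, Tier-2) as reported in Tables 4, 6, 8, 10 and 12.
# The PI Tier-0 count of 18 is inferred from the reported 43% of N = 42.
counts = {
    "ORT": (17, 19, 6),
    "PI":  (18, 17, 7),
    "WS":  (8, 29, 5),
    "TR":  (27, 11, 4),
    "TS":  (31, 7, 4),
}

N = 42
shares = {}
for ind, tiers in counts.items():
    assert sum(tiers) == N  # every study is scored on every indicator
    shares[ind] = tuple(round(100 * n / N) for n in tiers)

# e.g. ORT -> (40, 45, 14); the paper rounds these as approximately 40/45/15%
```

Small discrepancies with the in-text percentages (e.g., 6/42 = 14.3% reported as ≈15%, 27/42 = 64.3% as 65%) reflect rounding in the paper, not different counts.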

3. Discussion

In terms of output representation type (ORT), 85% of the studies in the corpus attain Tier-0 (raster imagery) or Tier-1 (voxel grids, graphs and meshes), implying that, at the moment, GenAI design tools operate mainly as aids for visual ideation and conceptual or massing exploration. Moving more studies to Tier-2 will require diffusion or GAN models capable of generating structured geometry for downstream CAD/BIM integration, or plug-and-play exporters that embed semantic metadata automatically. Tier-0 cases are usually conducted within web-based, raster-output diffusion services that naturally score high on tool readiness (TR). But both Tier-1 and some Tier-0 cases require significant output-conversion effort, including mesh clean-ups, retopology, API bridges, mesh-to-NURBS and CAD-to-BIM conversion, raster-to-vector strategies and so on, highlighting a persistent chasm between research prototypes and mainstream practice. Although these methods score low on workflow standardization (WS), they showcase an emerging pattern of hybrid pipelines that start with raster images and finish with parametric geometries without data loss.
Yet pipelines tend to be fragmented, especially where researchers must manually import raster output into conventional drafting or post-processing tools and contend with file-format loss, version mismatches, and human error. On the pipeline integration indicator (PI), Tier-2 studies usually involve single web-based platforms such as Stable Diffusion or Midjourney, but these score low on workflow standardization because they do not proceed beyond the conceptual and schematic design stages. Studies that integrate or directly embed GenAI models in CAD/BIM platforms are rare. For pipelines to achieve higher integration with fewer tool hand-offs, they would require advanced software-engineering resources and skills, open APIs, and ways around licensing restrictions. Packaging common ML tasks into CAD/BIM environments would push more workflows into Tier-2.
The dominance of Tier-1 papers (n = 29, 69%) in workflow standardization (WS) demonstrates that, with modest engineering effort, workflows can readily be mapped onto the schematic stage of design development. This commonly involves images, meshes, voxel grids, and topology diagrams used for concept and mood-board sketching, site massing exploration, room adjacency diagramming, and the like. For Tier-1 studies to deliver recognizable hand-offs to the next design stage, outcomes would need geometric re-modelling, and workflows would possibly need restructuring to account for code compliance, site metrics, and construction limitations. Some Tier-2 workflows (n = 5), however, demonstrate a standard end-to-end design thinking that bridges the gap between creative novelty and production pragmatism. Using native geometry exporters, metadata API bridges, custom scripting modules, and intricate visual-code definitions, they achieve some degree of seamless multi-stage alignment, though this underscores their dependence on custom tools and on additional technical expertise and computer-engineering resources.
Indeed, on the tool readiness indicator (TR), two-thirds of the studies (n = 27, 65%) rely on substantial custom programming, ad hoc model training, and dataset curation. This reflects both the novelty of the domain and the scarcity of ready-to-use domain-specific datasets and open-source modules for advanced ML integration in CAD/BIM environments. Only four studies (n = 4) use off-the-shelf tools, mostly via prompt-based input on pretrained platforms such as Stable Diffusion. One-quarter of the studies (n = 11, 26%) employ light custom or off-the-shelf code or macros. Without proper infrastructure, bespoke coding will remain the norm, forcing sophisticated AI workflows to rely heavily on ML experts.
Thus, in terms of technical skillset (TS), three-quarters of the studies (n = 31, 74%) demand highly specialized ML expertise for dataset curation, labelling, and pre-/post-processing (effort that often exceeds running the model itself), along with GPU-setup and metadata-management skills. Shifting Tier-0 to Tier-2 would require containerized components and plug-ins, robust GUIs with schema-aware nodes for common tasks, and shared datasets, allowing architects to operate sophisticated methods with ordinary professional skills. Until then, most high-performing GenAI methods in architectural design will remain dependent on specialist expertise.
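The tier shares cited throughout this discussion can be recomputed directly from the per-study scores in Table 2. A minimal sketch for the ORT column (values transcribed row by row from the table; note that 6/42 rounds to 14%, reported as approximately 15% in the text):

```python
from collections import Counter

# ORT scores for the 42 studies, transcribed row by row from Table 2
ort = [1, 1, 0, 1, 0, 2, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1,
       0, 1, 2, 2, 0, 0, 0, 0, 2, 1, 1, 0, 0, 2, 1, 1, 1, 1, 1, 2, 0, 1]
assert len(ort) == 42

counts = Counter(ort)  # studies per tier
shares = {tier: round(100 * n / len(ort)) for tier, n in sorted(counts.items())}
print(counts[0], counts[1], counts[2])  # 17 19 6
print(shares)  # {0: 40, 1: 45, 2: 14}
```

The same tally, applied to the PI, WS, TR, and TS columns of Table 2, reproduces the remaining distributions reported above.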

4. Conclusion

This review evaluated 42 studies of DL-based GenAI methods for architectural design using a five-indicator framework—Output Representation Type (ORT), Pipeline Integration (PI), Workflow Standardization (WS), Tool Readiness (TR), and Technical Skillset (TS)—to assess how well these methods map onto architectural practice workflows. The corpus was assembled from Scopus and leading edited volumes, journals, and conference proceedings (2015–2025) under clear inclusion criteria focused on form generation.
Across the corpus, most GenAI outputs are not CAD/BIM-native representations: 40% of outcomes are raster imagery (ORT Tier-0), 45% meshes/voxels/graphs (Tier-1), and only 15% CAD/BIM-native geometry (Tier-2). Toolchain pipelines are typically fragmented: 43% of cases use four or more loosely coupled steps (PI Tier-0), 40% link two to three tools (Tier-1), and 17% operate in a single platform or embedded plug-in (Tier-2). Most studies map onto the schematic design phase (WS Tier-1 = 69%), and multi-stage, BIM-compatible pipelines remain rare (WS Tier-2 = 12%). The dominant pattern is heavy bespoke coding (TR Tier-0 = 65%) and specialist skill requirements (TS Tier-0 = 74%). These findings substantiate a persistent chasm between ideation-oriented experimentation and mainstream CAD/BIM-based practice and delivery.
Closing this gap will require: shifting outputs from pixels to CAD/BIM geometry and building information metadata; compiling GenAI models into embedded modules, plug-ins and API bridges (e.g., Rhino.Inside/Revit add-ins) that minimize hand-offs; containerized components and schema-aware GUIs that can be managed by typical architectural skillsets; and shared datasets focused on practice limitations such as code compliance, environmental and structural metrics, etc. Two Tier-2 archetypes already visible—prompt-only ideation (accessible but raster-bound) and tightly embedded add-ins (practice-ready but engineering-intensive)—suggest a pragmatic development path.
Limitations of this study include its focus on academic sources (likely undercounting proprietary industry deployments), a corpus dominated by early-stage form-exploration studies, and the fast pace of the field. Nonetheless, the five-indicator rubric offers a practical workflow-integration maturity index for tracking progress over time and across domains. Future work should focus on in-practice studies, studies that track GenAI in the DD/CD design stages, and AI methods that support downstream design development without excessive skill demands.

Abbreviations

The following abbreviations are used in this manuscript:
GenAI Generative Artificial Intelligence
ML Machine Learning
DL Deep Learning
RL Reinforcement Learning
ANN Artificial Neural Network
DNN Deep Neural Network
CNN Convolutional Neural Network
GAN Generative Adversarial Network
VAE Variational Autoencoder
IWGAN Improved Wasserstein GAN
cGAN Conditional GAN
DDQN Double Deep Q-Network
NLP Natural Language Processing
BIM Building Information Modelling
CAD Computer Aided Design
UI User Interface
GUI Graphical User Interface
GPU Graphics Processing Unit
API Application Programming Interface
PPO Proximal Policy Optimization
LoRA Low-Rank Adaptation

References

  1. Alexander, C. A Pattern Language: Towns, Buildings, Construction; Oxford University Press: Oxford, 1977. [Google Scholar]
  2. Stiny, G.; Gips, J. Shape Grammars and the Generative Specification of Painting and Sculpture. In Information Processing 71; North-Holland: Amsterdam, 1972; pp 1460–1465.
  3. Gullichsen, E.; Chang, E. Generative Design in Architecture Using an Expert System. The Visual Computer 1985, 1, 161–168. [Google Scholar] [CrossRef]
  4. Gero, J.S. Architectural Optimization-A Review. Engineering Optimization 1975, 1, 189–199. [Google Scholar] [CrossRef]
  5. Negroponte, N. The Architecture Machine: Toward a More Human Environment; MIT Press: Cambridge, Mass, 1972. [Google Scholar]
  6. Grobman, Y.J.; Yezioro, A.; Capeluto, I.G. Computer-Based Form Generation in Architectural Design — A Critical Review. International Journal of Architectural Computing 2009, 7, 535–553. [Google Scholar] [CrossRef]
  7. Frazer, J. An Evolutionary Architecture; Architectural Association: London, 1995. [Google Scholar]
  8. Caldas, L.G.; Norford, L.K. A Design Optimization Tool Based on a Genetic Algorithm. Automation in Construction 2002, 11, 173–184. [Google Scholar] [CrossRef]
  9. Renner, G.; Ekárt, A. Genetic Algorithms in Computer Aided Design. Computer-Aided Design 2003, 35, 709–726. [Google Scholar] [CrossRef]
  10. Holland, B. Computational Organicism: Examining Evolutionary Design Strategies in Architecture. Nexus Netw J 2010, 12, 485–495. [Google Scholar] [CrossRef]
  11. Coates, P. Programming.Architecture; Routledge: London/New York, 2010. [Google Scholar]
  12. Herr, C.M.; Kvan, T. Adapting Cellular Automata to Support the Architectural Design Process. Automation in Construction 2007, 16, 61–69. [Google Scholar] [CrossRef]
  13. Jacob, C.; Von Mammen, S. Swarm Grammars: Growing Dynamic Structures in 3D Agent Spaces. Digital Creativity 2007, 18, 54–64. [Google Scholar] [CrossRef]
  14. Von Mammen, S.; Jacob, C. Swarm-Driven Idea Models – from Insect Nests to Modern Architecture. In Eco-Architecture II; WIT Press: Algarve, Portugal, 2008. [Google Scholar] [CrossRef]
  15. Castro Pena, M.L.; Carballal, A.; Rodríguez-Fernández, N.; Santos, I.; Romero, J. Artificial Intelligence Applied to Conceptual Design. A Review of Its Use in Architecture. Automation in Construction 2021, 124, 103550. [Google Scholar] [CrossRef]
  16. Vissers-Similon, E.; Dounas, T.; De Walsche, J. Classification of Artificial Intelligence Techniques for Early Architectural Design Stages. International Journal of Architectural Computing 2024, 14780771241260857. [Google Scholar] [CrossRef]
  17. Lystbæk, M.S. Machine Learning-Driven Processes in Architectural Building Design. Automation in Construction 2025, 178, 106379. [Google Scholar] [CrossRef]
  18. Bölek, B.; Tutal, O.; Özbaşaran, H. A Systematic Review on Artificial Intelligence Applications in Architecture. DRArch 2023, 4, 91–104. [Google Scholar] [CrossRef]
  19. Newton, D. Generative Deep Learning in Architectural Design. Technology|Architecture + Design 2019, 3, 176–189. [Google Scholar] [CrossRef]
  20. Sebestyen, A.; Özdenizci, O.; Legenstein, R.; Hirschberg, U. Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models. In Digital Design Reconsidered - Proceedings of the 41st Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe 2023); Graz, Austria, 2023; Vol. 2, pp 451–460. [CrossRef]
  21. Ennemoser, B.; Mayrhofer-Hufnagl, I. Design across Multi-Scale Datasets by Developing a Novel Approach to 3DGANs. International Journal of Architectural Computing 2023, 21, 358–373. [Google Scholar] [CrossRef]
  22. Pouliou, P.; Horvath, A.-S.; Palamas, G. Speculative Hybrids: Investigating the Generation of Conceptual Architectural Forms through the Use of 3D Generative Adversarial Networks. International Journal of Architectural Computing 2023, 21, 315–336. [Google Scholar] [CrossRef]
  23. Mueller, L.-M.; Andriotis, C.; Turrin, M. Using Generative Adversarial Networks to Create 3D Building Geometries; Nicosia, 2024; pp 479–488. [CrossRef]
  24. Abdelmoula, I.; Schulz, J.-U.; Da Silva Lopes Vieira, T. SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment. In Advancements in Architectural, Engineering, and Construction Research and Practice; Olanrewaju, A., Bruno, S., Eds.; Advances in Science, Technology & Innovation; Springer Nature Switzerland: Cham, 2024; pp 63–79. [CrossRef]
  25. Zhuang, X.; Zhu, P.; Yang, A.; Caldas, L. Machine Learning for Generative Architectural Design: Advancements, Opportunities, and Challenges. Automation in Construction 2025, 174, 106129. [Google Scholar] [CrossRef]
  26. Li, C.; Zhang, T.; Du, X.; Zhang, Y.; Xie, H. Generative AI Models for Different Steps in Architectural Design: A Literature Review. Frontiers of Architectural Research 2025, 14, 759–783. [Google Scholar] [CrossRef]
  27. As, I.; Pal, S.; Basu, P. Artificial Intelligence in Architecture: Generating Conceptual Design via Deep Learning. International Journal of Architectural Computing 2018, 16, 306–327. [Google Scholar] [CrossRef]
  28. Cai, C.; Li, B. Training Deep Convolution Network with Synthetic Data for Architectural Morphological Prototype Classification. Frontiers of Architectural Research 2021, 10, 304–316. [Google Scholar] [CrossRef]
  29. Veloso, P.; Krishnamurti, R. An Academy of Spatial Agents - Generating Spatial Configurations with Deep Reinforcement Learning. In Proceedings of the 38th International Conference on Education and Research in Computer Aided Architectural Design in Europe (eCAADe); 2020; Vol. 2.
  30. Huang, J.; Johanes, M.; Kim, F.C.; Doumpioti, C.; Holz, G.-C. On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs. Technology|Architecture + Design 2021, 5, 207–224. [Google Scholar] [CrossRef]
  31. Zheng, H.; Yuan, P.F. A Generative Architectural and Urban Design Method through Artificial Neural Networks. Building and Environment 2021, 205, 108178. [Google Scholar] [CrossRef]
  32. Veloso, P.; Krishnamurti, R. Self-Learning Agents for Spatial Synthesis. In Formal Methods in Architecture; Eloy, S., Leite Viana, D., Morais, F., Vieira Vaz, J., Eds.; Advances in Science, Technology & Innovation; Springer International Publishing: Cham, 2021; pp 265–276. [CrossRef]
  33. Danchenko, E. The AI-Teration Method and the Role of AI in Architectural Design. In Proceedings of the Future Technologies Conference (FTC) 2020, Volume 1; Arai, K., Kapoor, S., Bhatia, R., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, 2021; Vol. 1288, pp 525–538. [CrossRef]
  34. Wang, D.; Snooks, R. Intuitive Behavior - The Operation of Reinforcement Learning in Generative Design Processes; Hong Kong, 2021; pp 101–110. [CrossRef]
  35. Sun, C.; Zhou, Y.; Han, Y. Automatic Generation of Architecture Facade for Historical Urban Renovation Using Generative Adversarial Network. Building and Environment 2022, 212, 108781. [Google Scholar] [CrossRef]
  36. Eroğlu, R.; Gül, L.F. Architectural Form Explorations through Generative Adversarial Networks - Predicting the Potentials of StyleGAN; Ghent, Belgium, 2022; pp 575–582. [CrossRef]
  37. Zhuang, X.; Ju, Y.; Yang, A.; Caldas, L. Synthesis and Generation for 3D Architecture Volume with Generative Modeling. International Journal of Architectural Computing 2023, 21, 297–314. [Google Scholar] [CrossRef]
  38. Veloso, P.; Krishnamurti, R. Spatial Synthesis for Architectural Design as an Interactive Simulation with Multiple Agents. Automation in Construction 2023, 154, 104997. [Google Scholar] [CrossRef]
  39. Paananen, V.; Oppenlaender, J.; Visuri, A. Using Text-to-Image Generation for Architectural Design Ideation. International Journal of Architectural Computing 2024, 22, 458–474. [Google Scholar] [CrossRef]
  40. Chen, J.; Wang, D.; Shao, Z.; Zhang, X.; Ruan, M.; Li, H.; Li, J. Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions. Buildings 2023, 13, 2285. [Google Scholar] [CrossRef]
  41. Li, Y.; Xu, W.; Liu, X. Research on Architectural Generation Design of Specific Architect’s Sketch Based on Image-To-Image Translation. In Hybrid Intelligence; Yuan, P.F., Chai, H., Yan, C., Li, K., Sun, T., Eds.; Computational Design and Robotic Fabrication; Springer Nature Singapore: Singapore, 2023; pp 314–325. [CrossRef]
  42. Çelik, T. The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-Making. In Proceedings of the 2023 8th International Conference on Machine Learning Technologies; ACM: Stockholm Sweden, 2023; pp 133–138. [CrossRef]
  43. Sebestyen, A.; Özdenizci, O.; Hirschberg, U.; Legenstein, R. Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models; Graz, Austria, 2023; pp 451–460. [CrossRef]
  44. Horvath, A.-S.; Pouliou, P. AI for Conceptual Architecture: Reflections on Designing with Text-to-Text, Text-to-Image, and Image-to-Image Generators. Frontiers of Architectural Research 2024, 13, 593–612. [Google Scholar] [CrossRef]
  45. Peng, Z.; Zhang, Y.; Lu, W.; Li, X. Data-Driven Generative Contextual Design Model for Building Morphology in Dense Metropolitan Areas. Automation in Construction 2024, 168, 105820. [Google Scholar] [CrossRef]
  46. Tono, A.; Huang, H.; Agrawal, A.; Fischer, M. Vitruvio: Conditional Variational Autoencoder to Generate Building Meshes via Single Perspective Sketches. Automation in Construction 2024, 166, 105498. [Google Scholar] [CrossRef]
  47. Wang, L.; Zhou, X.; Liu, J.; Cheng, G. Automated Layout Generation from Sites to Flats Using GAN and Transfer Learning. Automation in Construction 2024, 166, 105668. [Google Scholar] [CrossRef]
  48. Jo, H.; Lee, J.-K.; Lee, Y.-C.; Choo, S. Generative Artificial Intelligence and Building Design: Early Photorealistic Render Visualization of Façades Using Local Identity-Trained Models. Journal of Computational Design and Engineering 2024, 11, 85–105. [Google Scholar] [CrossRef]
  49. Lee, J.-K.; Yoo, Y.; Cha, S.H. Generative Early Architectural Visualizations: Incorporating Architect’s Style-Trained Models. Journal of Computational Design and Engineering 2024, 11, 40–59. [Google Scholar] [CrossRef]
  50. Shi, M.; Seo, J.; Cha, S.H.; Xiao, B.; Chi, H.-L. Generative AI-Powered Architectural Exterior Conceptual Design Based on the Design Intent. Journal of Computational Design and Engineering 2024, 11, 125–142. [Google Scholar] [CrossRef]
  51. Çelik, T. Generative Design Experiments with Artificial Intelligence: Reinterpretation of Shape Grammar. OHI 2024, 49, 822–842. [Google Scholar] [CrossRef]
  52. Eisenstadt, V.; Langenhan, C.; Bielski, J.; Bergmann, R.; Althoff, K.-D. Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning. In Case-Based Reasoning Research and Development; Recio-Garcia, J.A., Orozco-del-Castillo, M.G., Bridge, D., Eds.; Lecture Notes in Computer Science; Springer Nature Switzerland: Cham, 2024; Vol. 14775, pp 321–337. [CrossRef]
  53. Tam, H.I.; Chen, Y.; Zheng, L.; Huang, L. Research on Machine Learning-Assisted Floor Plan Generation in Old-Style Residential Buildings: Taking Tong Lau in Macau as an Example. In Proceedings of the 3rd International Conference on Computer, Artificial Intelligence and Control Engineering; ACM: Xi’ an China, 2024; pp 470–475. [CrossRef]
  54. Zhang, F.; Sun, Z.; Chen, Q. Research on Interior Intelligent Design System Based On Image Generation Technology. Procedia Computer Science 2024, 243, 690–699. [Google Scholar] [CrossRef]
  55. Chen, L.; Zhang, Y.; Zheng, Y. A Performance-Based Generative Design Framework Based on a Design Grammar for High-Rise Office Towers during Early Design Stage. Frontiers of Architectural Research 2025, 14, 145–171. [Google Scholar] [CrossRef]
  56. Zheng, H. A Diffusion-Based Machine Learning Method for 3D Architectural Form-Finding. Frontiers of Architectural Research 2025, S2095263524001791. [Google Scholar] [CrossRef]
  57. Zeng, P.; Gao, W.; Li, J.; Yin, J.; Chen, J.; Lu, S. Automated Residential Layout Generation and Editing Using Natural Language and Images. Automation in Construction 2025, 174, 106133. [Google Scholar] [CrossRef]
  58. Yang, F.; Qian, W. Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects. Applied Sciences 2025, 15, 3000. [Google Scholar] [CrossRef]
  59. Li, Y.; Xu, W. A Deep Learning-Based Framework for Intelligent Modeling: From Architectural Sketch to 3D Model. Frontiers of Architectural Research 2025, S2095263525000627. [Google Scholar] [CrossRef]
  60. Kakooee, R.; Dillenburger, B. Enhancing Architectural Space Layout Design by Pretraining Deep Reinforcement Learning Agents. Journal of Computational Design and Engineering 2024, 12, 149–166. [Google Scholar] [CrossRef]
  61. Okonta, E.D.; Okeke, F.O.; Mgbemena, E.E.; Nnaemeka-Okeke, R.C.; Guo, S.; Awe, F.C.; Eke, C. An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design. Buildings 2025, 15, 2413. [Google Scholar] [CrossRef]
  62. Lee, E.J.; Park, S.J. A Structured Prompt Framework for AI-Generated Biophilic Architectural Spaces. Journal of Building Engineering 2025, 111, 113326. [Google Scholar] [CrossRef]
  63. Wang, Y.; Zhu, Y.; Wang, K.; Li, X. A Hybrid Deep Learning Approach to Investigating Architectural Morphology: A Workflow Combining Graph and Image Data to Classify High-Rise Residential Building Floorplans. Journal of Building Engineering 2025, 111, 113255. [Google Scholar] [CrossRef]
Table 1. 3-Tier Indicator systems with descriptions.
Indicator | 0 — Low integration | 1 — Moderate integration | 2 — High integration
1 Output Representation Type (ORT) | Pure raster imagery (no geometry) | Discrete geometry but not industry-native: topology graphs, voxel grids, meshes | BIM/CAD-ready geometry: NURBS surfaces and solids, vector-based geometries, parametric families, IFC
2 Pipeline Integration (PI) | ≥ 4 loosely coupled tools / manual hand-offs | 2–3 tools linked via scripts or plug-ins | Single platform or fully embedded plug-in, no exports/imports
3 Workflow Standardization (WS) | Experimental pipeline, diverges from typical design phases | Partially maps onto conventional concept / DD / CD flow | Seamless fit with standard BIM/CAD + project-delivery processes
4 Tool Readiness (TR) | Heavy bespoke programming essential | Occasional short scripts or macros | No coding required, off-the-shelf UI
5 Technical Skillset (TS) | Advanced ML/DL expertise required | Some grasp of model training, dataset preparation and moderate scripting | Typical architect skillset suffices
Table 2. 3-Tier scale scores for each of five indicators for 42 studies.
Title of Paper Name of First Author Ref No Name of Publication Year ORT PI WS TR TS
Artificial intelligence in architecture: Generating conceptual design via deep learning As [27] International Journal of Architectural Computing 2018 1 0 1 0 0
Generative Deep Learning in Architectural Design Newton [19] Technology|Architecture + Design 2019 1 0 0 0 0
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai [28] Frontiers of Architectural Research 2020 0 0 0 0 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso [29] eCAADe 2020 1 0 1 0 0
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang [30] Technology|Architecture + Design 2021 0 0 0 0 0
A generative architectural and urban design method through artificial neural networks Zheng [31] Building and Environment 2021 2 1 1 0 0
Self-learning Agents for Spatial Synthesis Veloso [32] Formal Methods in Architecture 2021 1 1 1 0 0
The AI-teration Method and the Role of AI in Architectural Design Danchenko [33] Proceedings of the Future Technologies Conference 2021 0 0 1 0 0
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang [34] CAADRIA 2021 1 0 1 0 0
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun [35] Building and Environment 2022 0 0 0 0 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu [36] eCAADe 2022 0 0 1 1 1
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser [21] International Journal of Architectural Computing 2023 1 1 0 0 0
Speculative hybrids: Investigating the generation of Conceptual architectural forms through the use of 3D generative adversarial networks Pouliou [22] International Journal of Architectural Computing 2023 1 1 1 0 0
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang [37] International Journal of Architectural Computing 2023 1 1 0 0 0
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso [38] Automation in Construction 2023 1 1 1 0 0
Using text-to-image generation for architectural design ideation Paananen [39] International Journal of Architectural Computing 2023 0 2 1 2 2
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen [40] Buildings 2023 0 2 1 2 2
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li [41] Hybrid Intelligence, Computational Design and Robotic Fabrication 2023 0 2 0 0 0
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik [42] Proceedings of the 8th International Conference on Machine Learning Technologies 2023 0 0 1 2 2
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen [43] eCAADe 2023 1 0 1 0 0
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath [44] Frontiers of Architectural Research 2024 0 0 0 0 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng [45] Automation in Construction 2024 1 1 2 1 0
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono [46] Automation in Construction 2024 2 1 1 0 0
Automated layout generation from sites to flats using GAN and transfer learning Wang [47] Automation in Construction 2024 2 1 2 1 0
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo [48] Journal of Computational Design and Engineering 2024 0 2 1 1 1
Generative early architectural visualizations: incorporating architect’s style-trained models Lee [49] Journal of Computational Design and Engineering 2024 0 2 1 1 1
Generative AI-powered architectural exterior conceptual design based on the design intent Shi [50] Journal of Computational Design and Engineering 2024 0 1 1 1 1
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik [51] Open House International 2024 0 0 1 2 2
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula [24] Advancements in Architectural, Engineering, and Construction Research and Practice 2024 2 0 2 1 0
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt [52] Case-Based Reasoning Research and Development 2024 1 0 1 0 0
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller [23] eCAADe 2024 1 1 1 0 0
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam [53] Proceedings of the 3rd International Conference on Computer, Artificial Intelligence and Control Engineering 2024 0 0 1 0 0
Research on Interior Intelligent Design System Based on Image Generation Technology Zhang [54] The 4th International Conference on Machine Learning and Big Data Analytics for IoT Security and Privacy 2024 0 2 1 0 1
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen [55] Frontiers of Architectural Research 2025 2 1 2 1 0
A diffusion-based machine learning method for 3D architectural form-finding Zheng [56] Frontiers of Architectural Research 2025 1 1 1 1 0
Automated residential layout generation and editing using natural language and images Zeng [57] Automation in Construction 2025 1 1 1 0 1
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang [58] Applied Sciences 2025 1 1 1 1 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li [59] Frontiers of Architectural Research 2025 1 0 1 1 0
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee [60] Journal of Computational Design and Engineering 2025 1 1 1 0 0
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta [61] Buildings 2025 2 2 2 0 0
A structured prompt framework for AI generated biophilic architectural spaces Lee [62] Journal of Building Engineering 2025 0 1 1 0 0
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang [63] Journal of Building Engineering 2025 1 0 1 0 0
Table 3. ORT score and rationale for each study.
Title of Paper Name of First Author ORT Rationale ORT score
Artificial intelligence in architecture: Generating conceptual design via deep learning As Topology graphs of rooms and adjacencies visualized as 2-D plan drawings. Discrete geometry useful for analysis but not BIM/CAD-ready. 1
Generative Deep Learning in Architectural Design Newton 2-D raster images (plans & façades) and 3-D voxel massings; useful discrete geometry but not BIM/CAD-ready. 1
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai Image-processing pipeline. Output is classification labels for 2D spatial prototypes, derived from raster image inputs. 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso Spatial configuration extruded from a grid/graph-based agent system. Output represents spatial configurations and three-dimensional extrusions. 1
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang 2-D raster images; no CAD/BIM-native geometry. 0
A generative architectural and urban design method through artificial neural networks Zheng CAD/BIM-ready output; 3D NURBS-based vector geometries, structured via control points and convertible to parametric surfaces. 2
Self-learning Agents for Spatial Synthesis Veloso Output is grid-based polyomino spatial partitions. Discrete geometric outputs that can represent diagrams and early space plans, but not BIM-native solids or vector geometries. 1
The AI-teration Method and the Role of AI in Architectural Design Danchenko Mood-boards as 2D raster image outputs (JPGs/PNGs). Although images are converted into multi-dimensional vectors, this is only for comparison and selection. 0
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang The RL agent produces a mesh-based topology field. No CAD/ BIM elements are generated. 1
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun 2D raster façade images (with 3-D massing or CAD geometry created later manually). 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu 2D raster images only; no downstream conversion to vector, mesh, BIM or voxels. 0
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser GAN results reconstructed as voxel-derived polygon meshes/SDF surfaces. Not just 2D rasters, but geometry is not CAD/BIM native (conversion needed). 1
Speculative hybrids: Investigating the generation of Conceptual architectural forms through the use of 3D generative adversarial networks Pouliou Point-cloud/SDF-based polygon meshes: richer than 2-D rasters yet geometry is not CAD/BIM native (conversion needed). 1
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang Voxel grid or SDF-derived polygon meshes that capture overall massing but not CAD/BIM native (conversion needed). 1
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso Polyominoes on a square grid, then passed to a Rhino/Grasshopper parametric script for NURBS solids. 1
Using text-to-image generation for architectural design ideation Paananen 2-D raster images; no CAD/BIM-native geometry. 0
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen 2-D raster images; no CAD/BIM-native geometry. 0
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li 2-D raster images; no CAD/BIM-native geometry. 0
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik 2-D raster images; no CAD/BIM-native geometry. 0
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen The model denoises a 32×32×32 density-voxel grid and isosurfaces are extracted to triangular meshes in Houdini. No parametric/BIM geometry is produced. 1
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath 2-D raster images; CAD plugins (Grasshopper/Monoceros) only used as a separate workflow. 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng Output is voxel-height matrix (voxel mass model). 1
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono Watertight triangular mesh in USD that can be imported directly into CAD/BIM tools. 2
Automated layout generation from sites to flats using GAN and transfer learning Wang Regularised meshes converted in Grasshopper to IFC-compatible BIM geometry, ready for direct editing in Revit/ArchiCAD. 2
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo 2-D raster images; no CAD/BIM-native geometry. 0
Generative early architectural visualizations: incorporating architect’s style-trained models Lee 2-D raster images; no CAD/BIM-native geometry. 0
Generative AI-powered architectural exterior conceptual design based on the design intent Shi 2-D raster images; no CAD/BIM-native geometry. 0
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik 2-D raster images; no CAD/BIM-native geometry. 0
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula Although the process starts with images of sketches, the output is editable BIM elements. 2
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt The system completes graph-based floor-plan topologies (rooms as nodes, connections as edges). 1
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller The GAN outputs watertight triangular meshes (OBJ) generated from 64×64×64 occupancy grids. Can be imported to CAD/BIM environments but not parametric geometry. 1
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam Three image output sets as 2D raster images (512 × 512 PNGs). No vector, mesh or BIM geometry is produced. 0
Research on Interior Intelligent Design System Based On Image Generation Technology Zhang 2D raster images (Stable Diffusion renderings as PNG/JPG). No vector, mesh or BIM elements are produced. 0
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen Editable NURBS/mesh geometry; Rhino/Grasshopper ready solids which can then be directly downstreamed to BIM. 2
A diffusion-based machine learning method for 3D architectural form-finding Zheng Triangulated mesh derived from heat-maps (height-field imagery) which can be re-imported to Rhino/Grasshopper for further editing. Although usable, still needs conversion for BIM workflows. 1
Automated residential layout generation and editing using natural language and images Zheng 2D raster images then converted to mesh models, yet they are not parametric or BIM objects. 1
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang Concept sketches and photorealistic images (one example rebuilt as a triangulated 3-D mass model). 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li Polygon meshes. Although later refined into NURBS solids, the generated outcome is not BIM/CAD native. 1
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee Layouts are stored as a 21 × 21 voxel / occupancy grid and visualised as 2-D plan images; the grid can be converted to polygons, but the framework does not yet emit CAD/BIM-native geometry. 1
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta BIM-native elements generated through NLP via CAD/BIM APIs. 2
A structured prompt framework for AI generated biophilic architectural spaces Lee 2D raster images (Stable-Diffusion renderings). No vector, mesh or BIM elements are produced. 0
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang Floor-plan raster images are converted into topological graphs for GNN processing. No editable CAD/BIM geometry produced. 1
Table 4. No and % distribution of studies for each Tier score for ORT indicator.
Tier Representation type Papers (no) % Distribution
0 Raster images 17 40 %
1 Mesh / voxel / graph 19 45 %
2 CAD/BIM-native 6 15 %
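Most Tier-1 outputs above are triangular meshes (OBJ files, USD, isosurfaced voxel grids) that import into CAD tools but carry no BIM semantics. As a minimal sketch, not taken from any reviewed study, of what such an exchange file amounts to: a unit cube written as a triangulated OBJ string, containing nothing beyond raw vertices and faces (no wall types, parameters, or metadata).

```python
def cube_obj() -> str:
    """Write a unit cube as a watertight, triangulated OBJ string.

    Vertex i takes its coordinates from the bits of i: (x, y, z).
    OBJ face indices are 1-based; face winding here is illustrative only.
    """
    verts = [((i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8)]
    quads = [
        (0, 1, 3, 2),  # x = 0 side
        (4, 6, 7, 5),  # x = 1 side
        (0, 4, 5, 1),  # y = 0 side
        (2, 3, 7, 6),  # y = 1 side
        (0, 2, 6, 4),  # z = 0 side
        (1, 5, 7, 3),  # z = 1 side
    ]
    lines = [f"v {x} {y} {z}" for x, y, z in verts]
    for a, b, c, d in quads:  # split each quad into two triangles
        lines.append(f"f {a + 1} {b + 1} {c + 1}")
        lines.append(f"f {a + 1} {c + 1} {d + 1}")
    return "\n".join(lines) + "\n"
```

Everything a downstream Revit/ArchiCAD workflow needs beyond this bare geometry (element types, levels, materials, parameters) must be reconstructed manually, which is the conversion burden the Tier-1 scores flag.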
Table 5. PI score and rationale for each study.
Title of Paper Name of First Author PI Rationale PI score
Artificial intelligence in architecture: Generating conceptual design via deep learning As Revit; Revit-API extraction; NetworkX; DNN; TensorFlow GAN code; separate visualisation routines. That is ≥ 4 loosely coupled tools with manual hand-offs. 0
Generative Deep Learning in Architectural Design Newton CAD downloads; custom Python voxel converter; TensorFlow/Keras GAN training; separate visualisation. That is ≥ 4 loosely coupled tools with manual hand-offs. 0
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai Custom Python/Mathematica code; LeNet in a bespoke training loop; synthetic dataset generators; CNN image pre-processing. More than four stages with manual hand-offs. 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso Custom Python/PyTorch DDQN; separate modelling software for extrusions. No unified platform. 0
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang Custom Python scripts; TensorFlow/Colab for GAN training; manual latent-space GUI; separate NLP pipelines; external CAD software for 3-D reconstruction. That is ≥ 4 loosely coupled tools with manual hand-offs. 0
A generative architectural and urban design method through artificial neural networks Zheng Rhino (for modeling and control point extraction); custom Python/TensorFlow code for the ANN and vector encoding. Moderately integrated 2–3 tools in the pipeline. 1
Self-learning Agents for Spatial Synthesis Veloso Custom deep RL; encoded Python-based multi-agent systems. No CAD integration, but workflow stays within one or two platforms. 1
The AI-teration Method and the Role of AI in Architectural Design Danchenko TensorFlow/Keras with CNN for classification; Runway ML software for StyleGAN training; Python script for vectorization; Python ANNOY library for selection. At least four independent environments and manual hand-offs. 0
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang Unity ML-Agents for training; custom Python/VEX scripts; modelling software for visualisation. That is more than three environments with manual or ad-hoc hand-offs. 0
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun Photoshop rectification/labelling; custom Python/TensorFlow CycleGAN training on GPU; CAD software for applying images. Workflow spans ≥ 4 loosely coupled stages with multiple hand-offs. 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu StyleGAN run in Google Colab; no coupling with CAD/BIM software. 0
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser Python script for voxel grid; TensorFlow for DCGAN training; custom SDF post-processor; external modeller for inspection. That is 2–3 tightly scripted stages—more integrated than hand-off pipelines, yet still multi-tool. 1
Speculative hybrids: Investigating the generation of Conceptual architectural forms through the use of 3D generative adversarial networks Pouliou Rhino modelling; Cockroach point-cloud exporter; DLNest 3DGAN training; Python post-filter; external viewer. That is 2-3 tightly scripted tools, partially integrated. 1
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang OBJ-to-voxel/SDF preprocessors; Python/TensorFlow auto-decoder & GAN training; external viewers. That is 2–3 scripted stages with some integration, but still multi-tool. 1
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso Custom Python simulation/PPO-RL training; Rhino/Grasshopper for live visualisation. Two main integrated environments but not seamless. 1
Using text-to-image generation for architectural design ideation Paananen Single web-based GUI (Midjourney, DALL-E, Stable Diffusion) 2
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen Custom diffusion model using PyTorch and DreamBooth 2
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li Manual sketch capture; image pre-processing; Python/TensorFlow CycleGAN training and generation. 2
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik Three separate GenAI tools (Midjourney, DALL-E 2, Craiyon). 0
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen Houdini parametric dataset classification; custom Python/PyTorch diffusion training; Houdini mesh clean-up. Three distinct environments with manual hand-offs. 0
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath TensorFlow-Google Colab; VQGAN+CLIP; StyleGAN-ADA 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng Rhino/Grasshopper for geometric feature extraction; Python/TensorFlow for VAE training; multivariate Random-forest. Three distinct steps. 1
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono Trained deep learning conditional VAE model through PyTorch; sketching front-end; modeller/BIM for mesh post-processing. 1
Automated layout generation from sites to flats using GAN and transfer learning Wang Python/TensorFlow for GAN inference; Rhino/Grasshopper for regularization of pixel boundaries; BIM 3D model into vector models. Tightly linked tools with scripted hand-off. 1
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo Single Stable-Diffusion-based generative-AI framework 2
Generative early architectural visualizations: incorporating architect’s style-trained models Lee Single Stable-Diffusion-based GUI or WebUI. 2
Generative AI-powered architectural exterior conceptual design based on the design intent Shi Single generative AI framework (Stable-Diffusion/ControlNet). Separate Python scripts for data scraping, LoRA training, and inference. That is 2-3 tightly scripted components. 1
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik Midjourney; DALL-E 2; Craiyon; Stable Diffusion; NightCafe. Separate experiments but single text-to-image platform for each. 0
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula Custom Python (TensorFlow/Keras) for cGAN training; Pix2pix image recognition and segmentation; in-house vectorisation Python library; Rhino3dm/Hops for Rhino curves conversion; Grasshopper/Rhino.Inside bridge to Revit. Requires at least four separate environments. 0
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt Python case-based reasoning; Girvan–Newman clustering; link prediction GNN; rule-based Consistency Checker; custom UI. At least four distinct components stitched by custom scripts. 0
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller Python GAN training/inference; MeshLab; Blender/Rhino viewing. Three steps, but hand-off is scripted (OBJ export). 1
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam Manual image editing to colour-code plans; python/PyTorch for cGAN training; optional viewer for result inspection. At least three loosely-coupled tools with manual hand-offs. 0
Research on Interior Intelligent Design System Based On Image Generation Technology Zhang Work runs inside Stable Diffusion ComfyUI node-graph GUI. 2
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen Rhino/Grasshopper/GHPython; EnergyPlus for simulation; Python ANN for prediction; Wallacei for multi-objective optimisation. That is three tightly scripted stages. 1
A diffusion-based machine learning method for 3D architectural form-finding Zheng LoRA/Stable Diffusion for heat map image generation; Rhino/Grasshopper for meshing; ControlNet for rendering. 1
Automated residential layout generation and editing using natural language and images Zheng Custom trained modules and generators (RL-Net, WD-Net, 3-D renderer) which look like an integrated workflow, but technically a multi-step toolchain. 1
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang ChatGPT; DSTF-GAN; Stable Diffusion; SketchUp and Rhino for meshing. 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li Stable Diffusion; CycleGAN; Pixel2Mesh; Rhino/Grasshopper; GH plugins. That is more than 5 tools. Fragmented tool-chain. 0
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee Single custom Python scripted environment with Matplotlib viewer. Manual export to Rhino or Revit. 1
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta Tightly coupled tools. NLP engine; middleware (Autodesk Forge/Dynamo) for NLP output translation into CAD scripts or API-compatible commands; APIs for BIM/CAD platforms (Revit, AutoCAD). 2
A structured prompt framework for AI generated biophilic architectural spaces Lee ChatGPT (prompt drafting); Python text-mining notebooks; Stable-Diffusion XL. 1
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang Space-syntax topological graphs; DepthmapX (VGA & agent analysis); manual diagramming (Illustrator/AutoCAD); custom Python/PyTorch pipeline. Manual hand-offs between more than four distinct environments. 0
Table 6. No and % distribution of studies for each Tier score for the PI indicator.
Tier Pipeline integration Papers (no) % Distribution
0 ≥ 4 loosely coupled tools / manual hand-offs 18 43 %
1 2–3 tools linked via scripts or plug-ins 17 40 %
2 Single platform or fully embedded plug-in—no exports/imports 7 17 %
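The summary rows in Tables 4, 6 and 8 are straightforward tallies of the per-study tier scores. As a minimal sketch of that bookkeeping (the score list below is hypothetical, not the review's actual 42-study data):

```python
from collections import Counter

def tier_distribution(scores):
    """Return {tier: (paper count, % of corpus, rounded to whole numbers)}
    for a list of 0/1/2 tier scores, mirroring the summary tables."""
    counts = Counter(scores)
    n = len(scores)
    return {t: (counts.get(t, 0), round(100 * counts.get(t, 0) / n))
            for t in (0, 1, 2)}

# Hypothetical 10-study indicator, not the review's data
demo = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
print(tier_distribution(demo))  # {0: (4, 40), 1: (4, 40), 2: (2, 20)}
```

Note that rounding to whole percentages means the three shares need not sum to exactly 100.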
Table 7. WS score and rationale for each study.
Title of Paper Name of First Author WS Rationale WS score
Artificial intelligence in architecture: Generating conceptual design via deep learning As Experimental pipeline for early-phase conceptual design for layout topology exploration. Lightly plugs into typical SD phase. 1
Generative Deep Learning in Architectural Design Newton GANs are used as experimental aids for precedent analysis and concept/ideation, not integrated into conventional SD/DD workflows. 0
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai Entire process is limited to morphological classification. It does not feed into standard design phases, nor does it produce design drawings, models, or construction-related information. 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso Outputs are interactive bubble diagrams / early space planning aids. Can be further processed for early SD stage. 1
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang Experimental ideation aid detached from typical design workflows. 0
A generative architectural and urban design method through artificial neural networks Zheng Aimed at early-stage form-finding; it is not tied to conventional workflows or regulatory BIM systems. Yet, it uses parametric representations that map reasonably to actual design constraints. 1
Self-learning Agents for Spatial Synthesis Veloso Supports early-stage diagrammatic layout and adjacency planning but does not engage with later design phases, or the production of documentation-ready drawings. 1
The AI-teration Method and the Role of AI in Architectural Design Danchenko Image output for early concept ideation. No integration into SD stage without re-work. 1
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang Output functions as concept stage massing generator. For SD/DD phase results must be remodelled. 1
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun Early-stage ideation aid for heritage stylistic studies, detached from typical design workflows. 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu Early-stage image production for form-finding and inspiration; no direct link to established design, modelling or documentation phases. 1
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser Aimed at speculative form-finding. Outputs lack dimensional control, codes, or documentation ties. 0
Speculative hybrids: Investigating the generation of Conceptual architectural forms through the use of 3D generative adversarial networks Pouliou Incorporates basic site metrics so generated masses respect site rules, but it stops at conceptual form-finding. 1
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang Aimed at early-concept form exploration; no links to site metrics and documentation datasets. 0
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso Conceptual-layout form-finding, but the grid discretization and agent logic still diverge from typical CAD/BIM workflow. 1
Using text-to-image generation for architectural design ideation Paananen Fits well with early-stage conceptual brainstorming; no dimensioning, site metrics or code checks for downstream documentation. 1
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen Fits well with early ideation / mood-board phases; yet no link to dimensioned CAD, or construction documentation. 1
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li Aimed solely at early-stage ideation (turning sketches into illustrative images). It does not connect to SD/DD/CD workflows, dimensioning, or compliance checks. 0
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik Concept / ideation phase for plan-layout brainstorming. Links to SD phase. 1
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen Outputs are abstract massings useful for early form-finding; they must be remodelled for SD/DD or BIM phases. 1
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath Purely experimental / speculative research workflow 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng Inputs are real world parameters and constraints. Outputs are early-stage massing options. A core everyday task in schematic design. 2
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono Automates “sketch-to-mass” translation—useful in concept design—but further manual refinement is needed for DD/CD deliverables. 1
Automated layout generation from sites to flats using GAN and transfer learning Wang Inputs (site boundary) and outputs (site massing, cores, flat layouts, BIM model) map directly onto common schematic-design and code-study tasks in mainstream workflows. 2
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo The images support early design communication, replacing quick sketches or mood boards, but they do not plug directly into downstream BIM / documentation stages. 1
Generative early architectural visualizations: incorporating architect’s style-trained models Lee Concept-sketch / mood-board phase which sets stylistic direction. Outputs must be remodelled for SD, DD or CD stages. 1
Generative AI-powered architectural exterior conceptual design based on the design intent Shi Early concept / mood-board stage based on converting design intent into façade imagery, but outputs must be remodelled for SD, DD or CD phases. 1
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik Concept-stage mood boards for plan-layout studies. Not directly usable in SD, DD or CD phases without complete remodelling. 1
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula Output lands inside Revit with correct wall types, doors, windows and scale, so the same model can continue through SD, detailing and coordination. 2
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt Outputs support early conceptual layout (graph-based plan autocompletion) but must be redrawn for SD phases. 1
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller Meshes are useful for early massing / form-finding but require remodelling for SD and BIM phases. 1
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam Generated plans are appropriate for early design concept and brainstorming phase. They must be redrawn and vectorized for SD and further DD stages. 1
Research on Interior Intelligent Design System Based On Image Generation Technology Zhang Output used for early concept / mood-board work in interior design. Results must be remodelled for SD/DD stages. 1
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen Targets schematic high-rise massing + energy/comfort code studies, a routine early-design task. Workflow maps to real world deliverables. 2
A diffusion-based machine learning method for 3D architectural form-finding Zheng Concept-mass exploration that links with SD phase. Yet, outputs need further modelling for DD and CD. 1
Automated residential layout generation and editing using natural language and images Zheng The system outputs conventional architectural representations (floor plans, 3-D massing) that fit the early-concept phase. Yet not directly embedded in mainstream CAD/BIM framework for DD/CD stages. 1
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang Fits the very early concept phase (rapid images and massing ideas) but does not feed directly into CAD drafting workflow; manual remodelling required. 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li Framework that mirrors concept stage / SD / DD. Yet, AI dependence still departs from conventional CAD/BIM delivery. 1
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee The RL agent automates the schematic space-planning stage (room sizing & adjacency), but hand-off to DD/CD still requires redrawing or scripting. 1
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta NLP extracted JSON data as input to standard Revit/AutoCAD APIs. Easy to slot into BIM processes for further downstream design development. 2
A structured prompt framework for AI generated biophilic architectural spaces Lee Outputs serve the early concept / mood-board stage (visual ideation). They are not usable in SD/DD without re-modelling. 1
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang Floor-plan classification and typological reasoning. But outside mainstream CAD/BIM workflow. Partial conceptual SD alignment. 1
Table 8. No and % distribution of studies for each Tier score for the WS indicator.
Tier Workflow standardization Papers (no) % Distribution
0 Experimental pipeline, diverges from typical design phases 8 19 %
1 Partially maps onto conventional concept → DD → CD flow 29 69 %
2 Seamless fit with standard BIM/CAD + project-delivery processes 5 12 %
Table 9. TR score and rationale for each study.
Title of Paper Name of First Author TR Rationale TR score
Artificial intelligence in architecture: Generating conceptual design via deep learning As Bespoke Python scripts, graph-mining algorithms, GAN training code. No off-the-shelf UI. 0
Generative Deep Learning in Architectural Design Newton Bespoke Python scripts, custom GAN training code. No off-the-shelf UI. 0
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai Custom Mathematica/Python scripts, with modified LeNet architecture, synthetic sample generation, and filtering. No off-the-shelf UI. 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso Python scripts for multi-agent DDQN and CNN training. Custom state encodings and Python post-processing. 0
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang Bespoke scripts for dataset curation, GAN parameter tuning, latent interpolation, and NLP analytics. No off-the-shelf UI. 0
A generative architectural and urban design method through artificial neural networks Zheng Custom code (Python, TensorFlow), bespoke ANN architecture with customized input/output vectors and training workflows. No off-the-shelf UI. 0
Self-learning Agents for Spatial Synthesis Veloso Custom-coded system, RL framework, polyomino partition engine, spatial reasoning logic, and interaction models. No off-the-shelf UI. 0
The AI-teration Method and the Role of AI in Architectural Design Danchenko Bespoke Python scripts for training the CNN classifier, building the StyleGAN dataset, vectorising images and clustering. 0
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang Requires custom RL policy, reward functions, mesh-agent behaviours, VDB voxelisation, and post-processing scripts. 0
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun Bespoke Python scripts for data augmentation, CycleGAN training. No off-the-shelf UI. 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu Some dataset-curation scripts and minor edits to the public StyleGAN repository were needed (image preprocessing, training loops). No purpose-built architectural plug-ins or GUIs. 1
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser Bespoke Python script required for voxel to pixel encoder, GAN tweaks and SDF reconstruction. No off-the-shelf UI. 0
Speculative hybrids: Investigating the generation of Conceptual architectural forms through the use of 3D generative adversarial networks Pouliou Bespoke scripts for point-cloud labelling, GAN training and constraint filtering needed. No off-the-shelf UI. 0
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang Bespoke scripts for dataset construction, voxelisation/SDF sampling, hyper-parameter tuning, and latent-space exploration. No off-the-shelf UI. 0
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso Custom Python code. No off-the-shelf UI. 0
Using text-to-image generation for architectural design ideation Paananen Entirely off-the-shelf; users simply type prompts. 2
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen End-users need no coding—just prompts. 2
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li Bespoke training scripts and parameter tuning; no off-the-shelf CAD/BIM platforms. 0
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik Off-the-shelf text-to-image tools; no coding or custom scripting needed. 2
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen Requires custom dataset generator, voxel converter, diffusion training scripts and inference notebooks. 0
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath Extensive bespoke scripts and model-training required. 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng Custom Grasshopper scripts. Also custom VAE and multivariate Random-forest code required; no turnkey plug-in provided. 1
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono Python script required, a trained conditional VAE, dataset generation. 0
Automated layout generation from sites to flats using GAN and transfer learning Wang End-users run a supplied Grasshopper definition and pretrained checkpoints, but training / fine-tuning still relies on bespoke Python scripts. 1
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo Adjustments to the training dataset and network weights are required (scripting and GPU training). Once the checkpoint is built, end-users mostly prompt without coding. 1
Generative early architectural visualizations: incorporating architect’s style-trained models Lee A basic LoRA fine-tune script (Python, GPU) required. Beyond that workflow is no-code. 1
Generative AI-powered architectural exterior conceptual design based on the design intent Shi A basic LoRA fine-tune script (Python, GPU) and ControlNet inference scripts required. Not packaged as a plug-and-play add-in. 1
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik Only prompting needed; no fine-tuning, Python scripting, or API integration required. Results are generated with off-the-shelf platforms. 2
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula Custom CNN training, bespoke dataset, and vectorisation library required. Off-the-shelf tools include Rhino/Grasshopper, Rhino.Inside, Hops, and Revit. 1
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt Requires custom case-based reasoning and clustering code, GNN training scripts, and a rule engine. 0
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller Core relies on a custom 3D IWGAN using Wasserstein loss with gradient penalty implemented in PyTorch; users must run training scripts and tweak hyper-parameters. 0
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam End-to-end operation depends on custom PyTorch notebooks, dataset-building scripts and image-pre-processing macros. No plug-and-play add-in is provided. 0
Research on Interior Intelligent Design System Based On Image Generation Technology Zhang Custom Python node (Voronoi) and adjusted Stable Diffusion checkpoints/LoRAs; requires ongoing code upkeep. 0
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen Grasshopper components are used, but Python scripts are also needed for ANN retraining and NSGA-II optimisation. 1
A diffusion-based machine learning method for 3D architectural form-finding Zheng Heat-maps are converted into meshes through Grasshopper components; LoRA fine-tuning scripts are required once, then reusable. 1
Automated residential layout generation and editing using natural language and images Zheng Bespoke deep-learning networks (MFDA-equipped RL-Net, WD-Net) and a custom point-based cross-modal representation (CMI-P) needed; substantial in-house coding required. 0
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang Ready-made Python scripts requiring fine-tuning by users. 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li Although open-source models are used, they are trained on bespoke datasets, with tailored Grasshopper definitions for detailing; moderate scripting required. 1
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee Custom Python environment with custom reward functions and a PPO implementation. 0
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta Custom Python script for NLP-to-CAD/BIM communication; middleware translates structured data into CAD scripts or commands. 0
A structured prompt framework for AI generated biophilic architectural spaces Lee Core operation depends on bespoke Python scripts for text mining and prompt assembly. 0
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang Custom Python scripts, custom GNN layers, and visualisation code. 0
Table 10. No and % distribution of studies for each Tier score for the TR indicator.
Tier Definition Papers (n) % Distribution
0 Heavy bespoke coding essential 27 65%
1 Helper scripts or light visual-code definitions and macros 11 26%
2 No custom code; commercial, off-the-shelf GUI 4 9%
Table 11. TS score and rationale for each study.
Title of Paper Name of First Author TS Rationale TS score
Artificial intelligence in architecture: Generating conceptual design via deep learning As Advanced ML expertise needed (DNNs, GANs, node embeddings) plus familiarity with Python and Revit APIs. Well beyond a typical architect’s skill set. 0
Generative Deep Learning in Architectural Design Newton Deep-learning expertise, GPU training know-how, and coding skills needed. Well beyond a typical architect’s skill set. 0
Training deep convolution network with synthetic data for architectural morphological prototype classification Cai CNN knowledge, training-pipeline setup, and synthetic-data generation skills needed. Well beyond a typical architect’s skill set. 0
An Academy of Spatial Agents: Generating spatial configurations with deep reinforcement learning Veloso Running and tuning demand GPU setup, RL know-how, and Python scripting. Well beyond a typical architect’s skill set. 0
On GANs, NLP and Architecture: Combining Human and Machine Intelligences for the Generation and Evaluation of Meaningful Designs Huang Deep-learning expertise, NLP text-mining, GPU workflows, and projective-geometry know-how needed. Well beyond a typical architect’s skill set. 0
A generative architectural and urban design method through artificial neural networks Zheng NN training knowledge, vector encoding of NURBS surfaces, feature-parameter tuning know-how, and Python coding needed. Well beyond a typical architect’s skill set. 0
Self-learning Agents for Spatial Synthesis Veloso Multi-agent deep reinforcement learning (MADRL) knowledge, spatial logic programming, and custom-CNN implementation skills needed. Well beyond a typical architect’s skill set. 0
The AI-teration Method and the Role of AI in Architectural Design Danchenko DL expertise needed: GAN training, dataset curation, Python/NLP, and GPU management. Well beyond a typical architect’s skill set. 0
Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes Wang Reinforcement-learning expertise, GPU setup, Unity scripting, and algorithmic-design skills needed. Well beyond a typical architect’s skill set. 0
Automatic generation of architecture façade for historical urban renovation using generative adversarial network Sun DL expertise needed: GAN hyper-parameters and GPU training, plus image labeling and ML evaluation metrics. Well beyond a typical architect’s skill set. 0
Architectural Form Explorations through Generative Adversarial Networks Eroglu Some ML know-how needed (Python, CUDA/GPU management, GAN training); outside technical support likely required. 1
Design across multi-scale datasets by developing a novel approach to 3DGANs. Ennemoser 3D-GAN architectures, voxel grids, GPU training, and procedural SDF modelling skills required. Well beyond a typical architect’s skill set. 0
Speculative hybrids: Investigating the generation of conceptual architectural forms through the use of 3D generative adversarial networks Pouliou Handling of 3D-GAN hyper-parameters, point-cloud data preparation, GPU training, and Python rule-scripting skills needed. Well beyond a typical architect’s skill set. 0
Synthesis and generation for 3D architecture volume with generative modeling. Zhuang 3D deep-learning skills (auto-decoder, GAN, SDF maths), GPU training, and Python data pipelines needed. Well beyond a typical architect’s skill set. 0
Spatial synthesis for architectural design as an interactive simulation with multiple agents Veloso RL skills, multi-agent systems coding, GPU training, plus Rhino-scripting skills needed. Well beyond a typical architect’s skill set. 0
Using text-to-image generation for architectural design ideation Paananen Only basic prompt literacy is needed; most study participants were first-time users. 2
Using Artificial Intelligence to Generate Master-Quality Architectural Designs from Text Descriptions Chen Only basic prompt literacy is needed; most study participants were first-time users. 2
Research on Architectural Generation Design of Specific Architect's Sketch Based on Image-To-Image Translation Li Deep-learning expertise demanded (CycleGAN, dataset curation, GPU training). Far beyond typical architectural skill sets. 0
The Role of Artificial Intelligence for The Architectural Plan Design: Automation in Decision-making Celik Basic prompt-writing skills for text-to-image interfaces; no ML training, coding, or GPU setup necessary. Well within typical architectural capabilities. 2
Generating Conceptual Architectural 3D Geometries with Denoising Diffusion Models Showcasing a deep learning based 3D generative prototype. Sebestyen GAN/diffusion-model know-how needed, plus Python scripting, GPU management, and Houdini VEX/VDB familiarity. Well beyond typical architectural skill sets. 0
AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators Horvath Advanced ML knowledge (dataset curation, model training, Python) essential. Well beyond typical architectural skill sets. 0
Data-driven generative contextual design model for building morphology in dense metropolitan areas Peng Users must understand VAE training, dimension reduction, multivariate Random-forest and Grasshopper scripting. This exceeds typical architectural skill sets. 0
Vitruvio: Conditional variational autoencoder to generate building meshes via single perspective sketches Tono Users must understand GPU set-up, VAE training, fine-tuning parameters, checkpoints, and AI inference. Well beyond typical architectural skills. 0
Automated layout generation from sites to flats using GAN and transfer learning Wang Deploying new projects or retraining demands ML expertise (GAN, transfer learning, GPU setup) and GH scripting. Skills outside the typical architect’s toolkit. 0
Generative artificial intelligence and building design: early photorealistic render visualization of façades using local identity-trained models Jo Preparing a locality-specific dataset, pairing images with text and running DreamBooth-style fine-tuning needs moderate ML knowledge; everyday use afterwards is simpler but still benefits from prompt-engineering skills. 1
Generative early architectural visualizations: incorporating architect’s style-trained models Lee Prompt-engineering skills and minimal ML literacy required (how to fine-tune/load a LoRA); no DL or CAD scripting needed for daily use. 1
Generative AI-powered architectural exterior conceptual design based on the design intent Shi Requires moderate ML literacy (dataset curation, prompt engineering, GPU basics). Still beyond typical architect skills without a computational specialist. 1
Generative design experiments with artificial intelligence: reinterpretation of shape grammar Celik Basic prompt-engineering skills and familiarity with platform quirks needed, but no ML, coding, or CAD knowledge necessary. 2
SketchPLAN Recognition and Vectorization of Floor Plan Sketches for Building Information Modelling Design Environment Abdelmoula Users must handle dataset annotation, GAN training, Python, OpenCV, Grasshopper scripting and Rhino.Inside APIs. That is well beyond typical architectural skills. 0
Autocompletion of Architectural Spatial Configurations Using Case-Based Reasoning, Graph Clustering, and Deep Learning Eisenstadt Requires understanding of graph theory, case-based reasoning workflows, GNN training, Python scripting and managing a GPU environment. Well beyond typical architectural practice skills. 0
Using Generative Adversarial Networks to Create 3D Building Geometries Mueller Effective deployment needs parallel computing setup, GAN training experience, and mesh post-processing. Mostly outside the average architect’s toolkit. 0
Research on Machine Learning-assisted Floor Plan Generation in Old-style Residential Buildings: Taking Tong Lau in Macau as an Example Tam Besides image editing of the datasets, the method requires ML specialists for cGAN training on a parallel-processing GPU platform. 0
Research on Interior Intelligent Design System Based On Image Generation Technology Zhang Requires ComfyUI graph-node management, LoRA management and optional node editing. Moderate ML literacy is essential. 1
A performance-based generative design framework based on a design grammar for high-rise office towers during early design stage Chen ANN training, GPU familiarity, and multi-objective optimisation know-how needed. Well beyond a typical architect’s toolkit. 0
A diffusion-based machine learning method for 3D architectural form-finding Zheng LoRA fine-tuning, Stable Diffusion sampling, and depth/Canny conditioning management needed. Still beyond most mainstream architectural skill sets. 0
Automated residential layout generation and editing using natural language and images Zheng Prompting skills suffice for use, but deploying/retraining the models still needs GPU hardware and some ML expertise. 1
Generative Architectural Design from Textual Prompts: Enhancing High-Rise Building Concepts for Assisting Architects Yang Moderate ML literacy needed; input from a computational designer is likely required. 1
A deep learning-based framework for intelligent modeling: From architectural sketch to 3D model Li Effective use demands GPU resources, dataset curation, DL training, plus advanced Grasshopper/plug-in skills. Well beyond typical architectural skillsets. 0
Enhancing architectural space layout design by pretraining deep reinforcement learning agents Kakooee Effective use requires RL know-how, Python scripting/debugging, and parallel-processing set-up. Skills well beyond a typical architect’s. 0
An Intelligent Natural Language Processing (NLP) Workflow for Automated Smart Building Design Okonta Successful deployment requires NLP model training, API programming, schema versioning, and error-handling strategies. Not routine architectural skill sets. 0
A structured prompt framework for AI generated biophilic architectural spaces Lee Running the pipeline demands prompt-engineering and Python/NLP skills. Beyond typical architectural skills. 0
A hybrid deep learning approach to investigating architectural morphology: A workflow combining graph and image data to classify high-rise residential building floorplans Wang Competence in deep learning and data science (PyTorch, ResNet, GNNs) demanded; skills uncommon among typical architects. 0
Table 12. No and % distribution of studies for each Tier score for the TS indicator.
Tier Definition Papers (n) % Distribution
0 High specialist demand 31 74%
1 Moderate scripting literacy 7 17%
2 Ordinary design skills 4 9%
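The percentage columns in Tables 10 and 12 follow directly from the raw tier counts over the 42 included studies; the table figures above are rounded to whole percentages. A minimal sketch of the calculation, using only the counts reported in the tables:

```python
# Reproduce the tier distributions of Tables 10 (TR) and 12 (TS)
# from the raw counts over the 42 included studies.

N_STUDIES = 42

tr_counts = {0: 27, 1: 11, 2: 4}  # Tool Readiness tiers (Table 10)
ts_counts = {0: 31, 1: 7, 2: 4}   # Technical Skillset tiers (Table 12)

def percent_distribution(counts: dict[int, int], total: int) -> dict[int, float]:
    """Each tier's share of the corpus, to one decimal place."""
    return {tier: round(100 * n / total, 1) for tier, n in counts.items()}

print(percent_distribution(tr_counts, N_STUDIES))  # {0: 64.3, 1: 26.2, 2: 9.5}
print(percent_distribution(ts_counts, N_STUDIES))  # {0: 73.8, 1: 16.7, 2: 9.5}
```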
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.