ARTICLE | doi:10.20944/preprints202304.0129.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Deep Learning; Semantic Segmentation; Radiotherapy; Multimodality
Online: 7 April 2023 (13:27:19 CEST)
In this work, we introduce an end-to-end multi-modal neural network to segment the Gross Tumor Volume (GTV) from 3D CBCT volumes during radiotherapy. We improve tumor segmentation by using a U-net that takes, along with the CBCT volume, additional information such as the tumor mask generated at the planning phase. The mask carries relevant information about the tumor's location and can guide the model toward a better prediction. This technique could become an alternative for automatically producing GTV segmentation masks on CBCT during radiotherapy, since in the traditional RT pipeline these are not segmented. We evaluated our model on a dataset of 82 patients who underwent radiotherapy. We compare the results of using target volumes registered from the planning CT as the mask seed with two different multi-modal architectures. Our model achieves a DSC of 0.706±0.002 with Late Fusion and 0.702±0.015 with Early Fusion using the GTV mask. Since the two models perform similarly on this mask, we ran further experiments with different types of masks, which suggest that the Late Fusion model produces a better tumor segmentation than the Early Fusion model. We also provide an ablation study consisting of a single-modality U-net and a metric based on the planning CT mask registration. It indicates a clear advantage of using our model to produce segmentations for this type of imaging.
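The early/late fusion distinction above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): the encoder is reduced to a pooling stand-in, and all shapes and names are assumptions. Early fusion stacks the CBCT volume and planning mask as input channels of a single network; late fusion encodes each modality separately and merges the resulting features before decoding.

```python
import numpy as np

# Hypothetical inputs: a 3D CBCT volume and a binary GTV mask from planning.
rng = np.random.default_rng(0)
cbct = rng.random((64, 64, 64)).astype(np.float32)            # CBCT intensities
gtv_mask = (rng.random((64, 64, 64)) > 0.5).astype(np.float32)  # planning mask

def early_fusion(volume, mask):
    """Early fusion: stack modalities as input channels of one encoder."""
    return np.stack([volume, mask], axis=0)  # shape (2, D, H, W)

def encode(x, n_features=8):
    """Stand-in for a U-net encoder branch: pool to a feature vector."""
    pooled = x.reshape(x.shape[0], -1).mean(axis=1)  # one value per channel
    return np.tile(pooled, n_features)[:n_features]

def late_fusion(volume, mask):
    """Late fusion: encode each modality separately, then merge features."""
    f_vol = encode(volume[None])   # features from the CBCT branch
    f_msk = encode(mask[None])     # features from the mask branch
    return np.concatenate([f_vol, f_msk])  # merged before the decoder

x_early = early_fusion(cbct, gtv_mask)   # (2, 64, 64, 64): two input channels
f_late = late_fusion(cbct, gtv_mask)     # (16,): concatenated branch features
```

In a real 3D U-net the pooling stand-in would be replaced by convolutional blocks, and late fusion would merge full feature maps rather than vectors; the sketch only shows where in the pipeline the two modalities meet.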
REVIEW | doi:10.20944/preprints202308.1514.v1
Subject: Medicine And Pharmacology, Cardiac And Cardiovascular Systems Keywords: multimodality imaging; advanced heart failure; extracorporeal cardiac support; cardiac transplant
Online: 22 August 2023 (07:49:59 CEST)
Advanced heart failure (AHF) presents a complex landscape with challenges spanning diagnosis, management, and patient outcomes. In response, the integration of multimodality imaging techniques has emerged as a pivotal approach. This comprehensive review delves into the significance of these imaging strategies in AHF scenarios. Multimodality imaging, encompassing echocardiography, cardiac magnetic resonance imaging (CMR), and cardiac computed tomography (CCT), stands as a cornerstone in the care of patients with both short- and long-term mechanical support devices. These techniques facilitate precise device selection, placement, and vigilant monitoring, ensuring patient safety and optimal device functionality. In the context of orthotopic cardiac transplantation (OTC), the role of multimodality imaging remains indispensable. Echocardiography offers invaluable insights into allograft function and potential complications. Advanced methods, such as speckle tracking echocardiography (STE), enable the detection of acute cellular rejection. CMR and CCT further enhance diagnostic precision, especially concerning allograft rejection and cardiac allograft vasculopathy. This comprehensive imaging approach goes beyond diagnosis, shaping treatment strategies and risk assessment. By harmonizing diverse imaging modalities, clinicians gain a panoramic understanding of each patient's unique condition, facilitating well-informed decisions. Thus, this review underscores the irreplaceable role of multimodality imaging in improving patient outcomes, refining treatment precision, and advancing the management of advanced heart failure.
ARTICLE | doi:10.20944/preprints202208.0406.v1
Subject: Computer Science And Mathematics, Software Keywords: Smartphone; App Usage; Transport Mode Usage; Latent Class Cluster Analysis; Multimodality; Environment
Online: 24 August 2022 (03:59:57 CEST)
Smartphone-based mobility apps enable users to make informed transportation decisions by offering instant access to transport-related information. This development has created a smartphone-enabled ecosystem of mobility services in developed countries, and it is slowly picking up pace in the Global South, where it can contribute to the decarbonization of urban transport. Work on this has already started in India, and there is considerable evidence of the profound impact of these apps on the perceived utility and usage of transport modes, with far-reaching implications for the Sustainable Development Goals (SDGs). However, for most users the use of smartphone apps is a novel trend, and knowledge of how existing apps affect the transport mode usage patterns of various user groups is essential for positioning new consolidated app-based services in the near future. Against this backdrop, the present study uses latent class cluster analysis to empirically investigate the impact of mobility apps on transport mode usage patterns in Delhi by classifying users into latent classes based on socioeconomic characteristics, attitudes/preferences, smartphone app usage, and mode usage patterns. The characteristics of the latent classes and the factors affecting an individual's probability of being assigned to each cluster are discussed, along with measures to encourage app-based mobility for each cluster.
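Latent class cluster analysis of the kind used above assigns each respondent a probability of belonging to each unobserved class. A minimal sketch of the underlying model for binary survey indicators, fitted with EM, is shown below; the indicators, class count, and simulated data are assumptions for illustration, not the study's actual survey items or estimation software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary indicators per respondent, e.g. "uses a ride-hailing
# app", "uses a transit app", "uses the metro" — 300 simulated respondents
# drawn from two latent classes with distinct item probabilities.
true_p = np.array([[0.9, 0.8, 0.2], [0.1, 0.2, 0.7]])
z = rng.integers(0, 2, size=300)
X = (rng.random((300, 3)) < true_p[z]).astype(float)

def latent_class_em(X, n_classes=2, n_iter=200):
    """Fit a latent class model for binary items with the EM algorithm."""
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class shares
    p = rng.uniform(0.3, 0.7, size=(n_classes, m))    # item probabilities
    for _ in range(n_iter):
        # E-step: responsibility of each class for each respondent
        like = np.prod(p[None] ** X[:, None]
                       * (1 - p[None]) ** (1 - X[:, None]), axis=2)
        r = like * pi
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class shares and item probabilities
        pi = r.mean(axis=0)
        p = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, p, r

pi, p, r = latent_class_em(X)
labels = r.argmax(axis=1)  # most probable class per respondent
```

The responsibilities `r` are what make the method "latent": class membership is never observed, only inferred, and covariates such as socioeconomic characteristics can then be related to these membership probabilities.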
ARTICLE | doi:10.20944/preprints202306.0922.v1
Subject: Computer Science And Mathematics, Signal Processing Keywords: Multimodality medical image; Image fusion; Sparse representation (SR); Kronecker criterion; Activity level measure
Online: 13 June 2023 (10:09:15 CEST)
Multimodal medical image fusion is a fundamental but challenging problem in brain science research and brain disease diagnosis, and it is difficult for sparse representation (SR)-based fusion to characterize the activity level with a single measurement without losing effective information. In this paper, a Kronecker-criterion-based SR framework is applied to medical image fusion with a patch-based activity level measure that integrates salient features from multiple domains. Inspired by the formation process of the visual system, spatial saliency is characterized by textural contrast (TC), composed of luminance and orientation contrasts, to promote more highlighted texture information participating in the fusion process. As a substitute for the conventional l1-norm-based sparse saliency, a metric based on the sum of sparse salient features (SSSF) is used to promote more significant coefficients in the composition of the activity level measure. The designed activity level measure is shown to be more conducive to maintaining the integrity and sharpness of detailed information. Experiments on multiple groups of clinical medical images verify the effectiveness of the proposed fusion method in terms of both visual quality and objective assessment. Furthermore, this work is helpful for the further detection and segmentation of medical images.
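The patch-based activity level idea can be sketched generically: for each patch, compute a saliency score per modality and let the more active patch drive the fused result. The sketch below is a simplified stand-in, not the paper's method: local standard deviation substitutes for textural contrast, a mean-removed l1 norm substitutes for sparse-coefficient saliency, and the inputs are random arrays in place of registered MRI/CT slices.

```python
import numpy as np

def textural_contrast(patch):
    """Illustrative spatial saliency: local luminance contrast (std)."""
    return patch.std()

def sparse_saliency(patch):
    """Illustrative sparse saliency: l1 norm of mean-removed values,
    standing in for coefficients from a learned sparse dictionary."""
    return np.abs(patch - patch.mean()).sum()

def fuse(img_a, img_b, size=8):
    """Patch-wise choose-max fusion driven by the combined activity level."""
    out = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], size):
        for j in range(0, img_a.shape[1], size):
            pa = img_a[i:i + size, j:j + size]
            pb = img_b[i:i + size, j:j + size]
            act_a = sparse_saliency(pa) * textural_contrast(pa)
            act_b = sparse_saliency(pb) * textural_contrast(pb)
            out[i:i + size, j:j + size] = pa if act_a >= act_b else pb
    return out

rng = np.random.default_rng(1)
mri = rng.random((32, 32))  # placeholder for a registered MRI slice
ct = rng.random((32, 32))   # placeholder for a registered CT slice
fused = fuse(mri, ct)
```

Real SR-based fusion would compute coefficients over a learned dictionary and often blends coefficients rather than copying whole patches; the sketch only conveys how an activity level measure arbitrates between modalities per patch.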
REVIEW | doi:10.20944/preprints202012.0149.v1
Subject: Medicine And Pharmacology, Immunology And Allergy Keywords: neurological wake-up test; multimodality monitoring; neurologic examination; daily-interruption of sedation; traumatic brain injury
Online: 7 December 2020 (12:41:39 CET)
Sedation is a ubiquitous practice in ICUs and NCCUs. It has the benefit of reducing cerebral energy demands, but it also precludes an accurate neurologic assessment. Because of this, sedation is intermittently stopped for the purpose of a neurologic assessment, termed a neurological wake-up test (NWT). NWTs are considered the gold standard in the continued assessment of brain-injured patients under sedation. NWTs also produce an acute stress response accompanied by elevations in blood pressure, respiratory rate, heart rate, and ICP. Cerebral microdialysis and brain tissue oxygen monitoring indicate that this stress response is not mirrored by alterations in overall cerebral metabolism and seldom affects oxygenation. The hard contraindications for the NWT are preexisting intracranial hypertension, barbiturate treatment, status epilepticus, and hyperthermia; in addition, hemodynamic instability, sedative use for primary ICP control, and sedative use for severe agitation or respiratory distress are considered significant safety concerns. Despite being ubiquitously recommended, it is not clear whether additional clinically relevant information is gleaned through its use, especially with the contemporaneous use of multimodality monitoring. Various monitoring modalities provide unique and pertinent information about neurologic function; however, their role in improving patient outcomes and guiding treatment plans has not been fully elucidated. There is a paucity of information on the optimal frequency of NWTs and whether it differs by type of injury: only one concrete recommendation was found in the literature, exemplifying the uncertainty surrounding their utility. The most commonly used and recommended sedative is propofol because of its rapid onset, short duration, and reduction of cerebral energy requirements. Dexmedetomidine may be employed to facilitate serial NWTs and should always be used in the non-intubated patient or if propofol infusion syndrome (PRIS) develops. Midazolam is not recommended because tissue accumulation and residual sedation confound a reliable NWT. Thus, NWTs are well tolerated in most patients and remain the recommended gold standard for continued neuromonitoring. Based on a single expert panel recommendation, they should be performed at least once per day. Propofol and dexmedetomidine are the main sedative choices, both enabling rapid awakening and a consistent NWT.
REVIEW | doi:10.20944/preprints202305.0105.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: Review; Human action recognition; Smart living; Multimodality; Real-time processing; Interoperability; Resource-constrained processing; Sensing technology; Machine learning; Deep learning; Signal processing; Smart home; Smart environment; Smart city; Smart Community; Ambient Assisted Living
Online: 3 May 2023 (06:54:40 CEST)
Smart living, a concept that has gained increasing attention in recent years, revolves around integrating advanced technologies in homes and cities to enhance the quality of life for citizens. Sensing and human action recognition are crucial aspects of this concept. Smart living applications span various domains, such as energy consumption, healthcare, transportation, and education, which greatly benefit from effective human action recognition. This field, originating from computer vision, seeks to recognize human actions and activities using not only visual data but also many other sensor modalities. This paper comprehensively reviews the literature on human action recognition in smart living environments, synthesizing the main contributions, challenges, and future research directions. The review selects five key domains: Sensing Technology, Multimodality, Real-time Processing, Interoperability, and Resource-Constrained Processing, as they encompass the critical aspects required for successfully deploying human action recognition in smart living. These domains highlight the essential role that sensing and human action recognition play in successfully developing and implementing smart living solutions. This paper serves as a valuable resource for researchers and practitioners seeking to further explore and advance the field of human action recognition in smart living.