ARTICLE | doi:10.20944/preprints201704.0165.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: synthetic aperture radar; features extraction; saliency detection; image fusion
Online: 26 April 2017 (06:06:19 CEST)
Saliency detection in synthetic aperture radar (SAR) images is a difficult problem. This paper proposes a multitask saliency detection (MSD) model for SAR images. First, four features of the SAR image are extracted as the input of the MSD model: intensity, orientation, uniqueness, and global contrast. Then, the saliency map is generated by multitask sparsity pursuit (MTSP), which integrates the multiple features collaboratively. Subjective and objective evaluations of the MSD model verify its effectiveness. Based on the saliency maps of the source images, an image fusion method is proposed for fusing SAR and color optical images. Experimental results on real data show that the proposed fusion method is superior to existing methods in terms of several universal quality evaluation indexes as well as in visual quality. The salient areas of the SAR image are highlighted, and the spatial and spectral details of the color optical image are preserved in the fusion result.
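The feature-combination idea above can be illustrated with a minimal sketch. Note this is an assumption-laden toy: the paper's MTSP solves a joint sparse optimization, whereas here each feature map is simply normalized and averaged to show how several cues merge into one saliency map.

```python
# Toy sketch of multi-feature saliency fusion (illustration only:
# the paper's MTSP is a joint sparse optimization, not a plain average).

def normalize(feature_map):
    """Rescale a 2-D feature map to the range [0, 1]."""
    flat = [v for row in feature_map for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in feature_map]

def fuse_saliency(feature_maps):
    """Average several normalized feature maps into one saliency map."""
    norm = [normalize(fm) for fm in feature_maps]
    rows, cols = len(norm[0]), len(norm[0][0])
    return [[sum(fm[r][c] for fm in norm) / len(norm)
             for c in range(cols)] for r in range(rows)]

# Four toy "feature maps" standing in for intensity, orientation,
# uniqueness, and global contrast:
intensity   = [[0, 2], [4, 8]]
orientation = [[1, 1], [3, 5]]
uniqueness  = [[0, 0], [2, 2]]
contrast    = [[0, 4], [4, 4]]
saliency = fuse_saliency([intensity, orientation, uniqueness, contrast])
```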
REVIEW | doi:10.20944/preprints202010.0388.v1
Subject: Engineering, Automotive Engineering Keywords: Autism Spectrum Disorder; activity analysis; automated detection; repetitive behavior; abnormal gait; visual saliency
Online: 19 October 2020 (14:49:24 CEST)
Autism Spectrum Disorder (ASD) is a neuro-developmental disorder that limits social interactions, cognitive skills, and abilities. Since ASD can last throughout an affected person's life, diagnosis at early onset can yield a significant positive impact. Current medical diagnostic systems (e.g., DSM-5/ICD-10) are somewhat subjective and rely purely on behavioral observation of symptoms; hence, some individuals are misdiagnosed or diagnosed late. Researchers have therefore focused on developing data-driven automated diagnosis systems with shorter screening time, lower cost, and improved accuracy while significantly reducing professional intervention. Human Activity Analysis (HAA) is considered one of the most promising niches in computer vision research. This paper analyzes its potential for the automated detection of autism by tracking characteristics exclusive to autistic individuals, such as repetitive behavior, atypical walking style, and unusual visual saliency. This review provides a detailed inspection of HAA-based autism detection literature published from 2011 onwards, depicting core approaches, challenges, probable solutions, available resources, and the scope for future exploration in this arena. According to our study, deep learning outperforms machine learning in ASD detection, with classification accuracies of 76% to 95% on different datasets comprising video, image, or skeleton data recorded while participants performed a large number of actions. Machine learning, however, provides satisfactory results on datasets with a small number of action classes, with accuracies ranging from 60% to 93% across numerous studies. We hope this extensive review will provide a comprehensive guideline for researchers in this field.
ARTICLE | doi:10.20944/preprints201804.0251.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: ATR; ISAR/SAR images; saliency attention; SIFT; multitask-SRC
Online: 19 April 2018 (10:32:02 CEST)
In this paper, we propose a novel approach to recognizing radar targets in inverse synthetic aperture radar (ISAR) and synthetic aperture radar (SAR) images. The approach is based on multiple salient keypoint descriptors (MSKD) and multitask sparse representation based classification (MSRC). To characterize the targets in the radar images, we combine the scale-invariant feature transform (SIFT) and the saliency map. The goal of this combination is to reduce the number of SIFT keypoints and their computation time by keeping only those located in the target area (salient region). We then compute the feature vectors of the resulting salient SIFT keypoints (MSKD). This methodology is applied to both training and test images. The MSKD of the training images is used to construct the dictionary of a sparse convex optimization problem. To achieve recognition, we adopt MSRC, treating each vector in the MSKD as a task. This classifier solves the sparse representation problem for each task over the dictionary and determines the class of the radar image according to all sparse reconstruction errors (residuals). The effectiveness of the proposed approach has been demonstrated by a set of extensive empirical results on ISAR and SAR image databases. The results show the ability of our method to recognize both aircraft and ground targets accurately.
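The keypoint-pruning step described above can be sketched as follows. This is an illustrative simplification with made-up names: the paper uses actual SIFT keypoints and a computed saliency map, while here keypoints are plain (x, y) tuples checked against a thresholded toy saliency grid.

```python
# Sketch of the keypoint-pruning step: keep only detected keypoints
# that fall inside the salient (target) region. Names and threshold
# are illustrative, not the paper's exact parameters.

def prune_keypoints(keypoints, saliency_map, threshold=0.5):
    """Keep keypoints whose saliency value exceeds the threshold."""
    return [(x, y) for (x, y) in keypoints
            if saliency_map[y][x] > threshold]

saliency = [[0.1, 0.9],
            [0.2, 0.8]]
keypoints = [(0, 0), (1, 0), (1, 1)]
salient_kp = prune_keypoints(keypoints, saliency)
# salient_kp keeps only the two keypoints in the salient column
```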
ARTICLE | doi:10.20944/preprints201710.0093.v1
Subject: Engineering, Automotive Engineering Keywords: SAR image; Visual attention model; Texture Saliency; Feature map; Focus of attention
Online: 13 October 2017 (17:08:14 CEST)
Target detection in synthetic aperture radar (SAR) remote sensing images is a fundamental but challenging problem in satellite image analysis; it plays an important role in a wide range of applications and has received significant attention in recent years. The human visual system detects visual saliency extraordinarily quickly and reliably, yet computational modeling of SAR image scenes remains a challenge. This paper analyzes the defects and shortcomings of traditional visual models applied to SAR images, and then proposes a visual attention model designed for SAR images. The model follows the basic framework of the classical ITTI model and selects and extracts texture features and other features that describe the SAR image better. We propose a new algorithm for computing the local texture saliency of the input image, from which the model constructs the corresponding feature saliency maps. Next, a new feature fusion mechanism replaces the linear additive mechanism of classical models to obtain the overall saliency map. Finally, taking into account the gray-scale characteristics of the focus of attention (FOA) in the saliency maps of all features, the model chooses the best saliency representation; through a multi-scale competition strategy, filtering and threshold segmentation of the saliency maps select the salient regions accurately, completing visual saliency detection in SAR images. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the visual models. The results show that our model provides superior performance compared with classical visual models: it reduces the false alarms caused by speckle noise, and its detection speed is improved by 25% to 45%.
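One common way to measure local texture saliency, sketched below, is local gray-level variance in a window. This is an assumed stand-in, since the abstract does not specify the paper's texture algorithm; it only illustrates why flat regions score low and textured regions score high.

```python
# Illustrative local texture-saliency measure (the paper's exact
# texture algorithm is not given in the abstract): gray-level
# variance in a square window around a pixel.

def local_variance(img, r, c, radius=1):
    """Gray-level variance in a (2*radius+1)^2 window around (r, c)."""
    vals = [img[i][j]
            for i in range(max(0, r - radius), min(len(img), r + radius + 1))
            for j in range(max(0, c - radius), min(len(img[0]), c + radius + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # homogeneous region
edge = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]   # textured/edge region
```

A flat patch yields zero variance while a textured patch yields a high score, which is the behavior a texture-saliency feature map needs.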
ARTICLE | doi:10.20944/preprints202010.0335.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Object perception; Reflection symmetry; Saliency Symmetry Model; Isotropic symmetry operator; Multi-scale implementation
Online: 15 October 2020 (16:32:18 CEST)
This paper presents an optimized feature-centered framework for detecting and localizing reflection symmetry axes for object perception. The proposed framework obtains an improved reflection symmetry axis based on salient symmetry features. It starts with a refined Multi-scale Saliency Symmetry Model (MSSM), realized by applying an isotropic symmetry operator to salient points in scale-space rather than to all pixels. In each scale, salient points are initially extracted as local extrema of the image and are further refined by a multi-scale implementation to generate salient symmetry feature maps. A symmetric transformation matrix is then computed from the optimal feature matching pairs; it serves as an abstract representation of the constraint regions of symmetric objects in an image and optimizes the detection of potential symmetry axes. The framework has been evaluated experimentally both on the classical dataset from a symmetry detection challenge and on the latest dataset. The results show that the framework achieves better or comparable results and can be further adapted into human-computer interaction equipment for reflection-symmetric object perception and tracking.
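The mirror-comparison idea behind reflection symmetry detection can be shown with a toy score. This is an assumption: the paper's operator works on multi-scale salient points, whereas this sketch scores a candidate vertical axis directly on raw pixel pairs.

```python
# Toy sketch of scoring a candidate vertical reflection axis: the
# fraction of mirrored pixel pairs across the axis that match.
# (The paper's isotropic operator on salient points is more refined.)

def vertical_symmetry_score(img, axis_col):
    """Fraction of mirrored pixel pairs across axis_col that match."""
    matches, total = 0, 0
    width = len(img[0])
    for row in img:
        for offset in range(1, min(axis_col + 1, width - axis_col)):
            total += 1
            if row[axis_col - offset] == row[axis_col + offset]:
                matches += 1
    return matches / total if total else 0.0

img = [[1, 2, 1],
       [3, 4, 3]]
# The middle column is a perfect reflection axis for this image.
```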
ARTICLE | doi:10.20944/preprints201904.0322.v1
Subject: Engineering, Mechanical Engineering Keywords: aluminum profile surface defects; multiscale defect detection network; deep learning; average precision (AP); saliency maps
Online: 29 April 2019 (09:37:07 CEST)
Aluminum profile surface defects can greatly affect the performance, safety, and reliability of products. Traditional human visual inspection has low accuracy and is time-consuming, and machine vision-based methods depend on hand-crafted features that need to be carefully designed and lack robustness. To recognize multiple types of defects of various sizes on aluminum profiles, a multiscale defect detection network based on deep learning is proposed. The network is trained and evaluated on images of aluminum profile surface defects. Results show average precision (AP) values of 84.6%, 48.5%, 96.9%, 97.9%, 96.9%, 42.5%, 47.2%, 100%, 100%, and 43.3% for the ten defect categories, respectively, with a mean AP of 75.8%, illustrating the effectiveness of the network for aluminum profile surface defect detection. In addition, saliency maps show the feasibility of the proposed network.
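The reported mean AP is simply the arithmetic mean of the ten per-class AP values, which can be checked directly:

```python
# Check of the reported mean AP: average the ten per-class AP
# values quoted in the abstract.

aps = [84.6, 48.5, 96.9, 97.9, 96.9, 42.5, 47.2, 100.0, 100.0, 43.3]
mean_ap = sum(aps) / len(aps)
# mean_ap = 75.78, which rounds to the reported 75.8% mean AP
```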