Biblio

Filters: Keyword is multi-scale
2020-12-21
Han, K., Zhang, W., Liu, C.  2020.  Numerical Study of Acoustic Propagation Characteristics in the Multi-scale Seafloor Random Media. 2020 IEEE 3rd International Conference on Information Communication and Signal Processing (ICICSP). :135–138.
There is some uncertainty about the applicability and accuracy of current theories of wave propagation in sediments. Numerical modelling of acoustic data has long been recognized as a powerful means of understanding complicated wave propagation and interaction. In this paper, we used the coupled two-dimensional PSM-BEM program to simulate acoustic wave propagation in a seafloor containing distributed multi-scale random media. The effects of fluid flow between the pores and the grains with a multi-scale distribution were considered. The results show that the coupled PSM-BEM program can be applied directly to both high- and low-frequency seafloor acoustics. A porous frame whose pore space is saturated with fluid can greatly increase the magnitude of acoustic anisotropy, and acoustic wave velocity dispersion and attenuation are significant over a frequency range that spans at least two orders of magnitude.
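
The paper's code is not reproduced here. As a rough, illustrative sketch of the kind of input such a simulation requires, the Python snippet below builds a 2-D multi-scale random velocity model by superposing FFT-filtered Gaussian random fields at several correlation lengths, a common way to approximate multi-scale seafloor heterogeneity. The grid size, correlation lengths, background velocity, and perturbation level are assumptions chosen for illustration, not values from the paper.

    # Sketch: 2-D multi-scale random medium via FFT-filtered Gaussian noise.
    # Grid size, correlation lengths, and perturbation level are illustrative
    # assumptions; they are not taken from Han et al. (2020).
    import numpy as np

    def gaussian_random_field(n, corr_len, rng):
        """One realization of a Gaussian-correlated 2-D random field (zero mean, unit std)."""
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        # Gaussian power spectrum with correlation length corr_len (in grid cells)
        spectrum = np.exp(-0.5 * (corr_len ** 2) * ((2 * np.pi) ** 2) * (kx ** 2 + ky ** 2))
        noise = rng.standard_normal((n, n))
        field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spectrum)))
        return (field - field.mean()) / field.std()

    rng = np.random.default_rng(0)
    n = 256                  # grid points per side (assumed)
    base_velocity = 1700.0   # m/s, nominal sediment sound speed (assumed)
    scales = [4, 16, 64]     # correlation lengths in grid cells: the multi-scale part
    perturbation = 0.05      # 5% rms velocity fluctuation (assumed)

    # Superpose fields at all scales, then perturb the background velocity.
    heterogeneity = sum(gaussian_random_field(n, s, rng) for s in scales) / np.sqrt(len(scales))
    velocity_model = base_velocity * (1.0 + perturbation * heterogeneity)
    print(velocity_model.min(), velocity_model.max())

A model like this would serve only as the heterogeneous input grid; the wave propagation itself is handled by the coupled PSM-BEM solver described in the paper.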
2019-05-01
Zhu, Dandan, Dai, Lei, Zhang, Guokai, Shao, Xuan, Luo, Ye, Lu, Jianwei.  2018.  MAFL: Multi-Scale Adversarial Feature Learning for Saliency Detection. Proceedings of the 2018 International Conference on Control and Computer Vision. :90–95.

Previous saliency detection methods usually focus on extracting features to deal with the complex background of an image. However, these methods cannot effectively capture the semantic information of images. In recent years, the Generative Adversarial Network (GAN) has become a prevalent research topic. Experiments show that GANs can generate high-quality images that look like natural images. Inspired by the effectiveness of GAN feature learning, we propose a novel multi-scale adversarial feature learning (MAFL) model for saliency detection. In particular, we build the complete saliency detection framework on two deep CNN modules: the multi-scale G-network takes natural images as inputs and generates the corresponding synthetic saliency maps, while the D-network contains a novel correlation layer that is used to determine whether an image is a synthetic saliency map or a ground-truth saliency map. Quantitative and qualitative experiments on three benchmark datasets demonstrate that our method outperforms seven state-of-the-art methods.
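
The abstract does not spell out the exact form of the correlation layer. The sketch below, assuming PyTorch, shows one plausible reading: the D-network extracts convolutional features from the natural image and from a candidate saliency map, computes a per-location correlation (here, cosine similarity) between the two feature maps, and classifies the pair as ground truth or synthetic. The channel counts, layer depths, and choice of cosine similarity are illustrative assumptions, not the MAFL architecture.

    # Sketch of a discriminator with a correlation layer, assuming PyTorch.
    # Architecture details (channel counts, cosine similarity as the
    # correlation measure) are illustrative assumptions, not the MAFL design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CorrelationDiscriminator(nn.Module):
        def __init__(self):
            super().__init__()
            # Separate feature extractors for the RGB image and the 1-channel saliency map.
            self.image_feat = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.map_feat = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Classifier over the per-location correlation map: ground truth vs. synthetic.
            self.head = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
            )

        def forward(self, image, saliency_map):
            fi = self.image_feat(image)       # (B, 64, H/4, W/4)
            fm = self.map_feat(saliency_map)  # (B, 64, H/4, W/4)
            # Correlation layer: cosine similarity between feature maps at each location.
            corr = F.cosine_similarity(fi, fm, dim=1, eps=1e-8).unsqueeze(1)  # (B, 1, H/4, W/4)
            return self.head(corr)            # logit: ground truth vs. synthetic

    # Usage: score a batch of image / saliency-map pairs.
    d = CorrelationDiscriminator()
    images = torch.randn(2, 3, 128, 128)
    maps = torch.rand(2, 1, 128, 128)
    logits = d(images, maps)
    print(logits.shape)  # torch.Size([2, 1])

In a full adversarial setup, the G-network would be trained to produce saliency maps that this discriminator scores as ground truth, but the training loop and the multi-scale G-network itself are beyond the scope of this sketch.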