Biblio

Filters: Keyword is Image analysis
2022-07-05
Fallah, Zahra, Ebrahimpour-Komleh, Hossein, Mousavirad, Seyed Jalaleddin.  2021.  A Novel Hybrid Pyramid Texture-Based Facial Expression Recognition. 2021 5th International Conference on Pattern Recognition and Image Analysis (IPRIA). :1–6.
Automated analysis of facial expressions is one of the most interesting and challenging problems in many areas such as human-computer interaction. Facial images are affected by many factors, such as intensity, pose and facial expression, which make facial expression recognition a challenging problem. The aim of this paper is to propose a new method based on the pyramid local binary pattern (PLBP) and the pyramid local phase quantization (PLPQ), which are extensions of the local binary pattern (LBP) and the local phase quantization (LPQ), two methods for extracting texture features. The LBP operator extracts features in the spatial domain and the LPQ operator extracts features in the frequency domain; combining features from the spatial and frequency domains can provide important information from both. In this paper, the PLBP and PLPQ operators are used separately to extract features, which are then combined to create a new feature vector. The advantage of the pyramid transform domain is that it can recognize facial expressions efficiently and with high accuracy, even for very low-resolution facial images. The proposed method is verified on the CK+ facial expression database, where it achieves a recognition rate of 99.85%.
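As a rough illustration of the feature-combination idea described above, the following sketch concatenates spatial-domain LBP histograms with a simplified frequency-domain (LPQ-style) descriptor over a Gaussian image pyramid. The window sizes, pyramid depth and quantization scheme are illustrative assumptions, not the authors' exact operators.

```python
# Minimal sketch: spatial LBP + simplified LPQ-style frequency features over a pyramid.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import pyramid_gaussian

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of a grayscale image (spatial domain)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def lpq_like_histogram(gray, win=7):
    """Simplified LPQ-style descriptor: sign-quantized low-frequency DFT
    coefficients of local windows (frequency domain)."""
    gray = np.asarray(gray, dtype=float)
    h, w = gray.shape
    codes = []
    freqs = [(0, 1), (1, 0), (1, 1), (1, -1)]          # low-frequency sample points
    for y in range(0, h - win, win):
        for x in range(0, w - win, win):
            F = np.fft.fft2(gray[y:y + win, x:x + win])
            bits = [(np.real(F[u, v]) >= 0, np.imag(F[u, v]) >= 0) for u, v in freqs]
            codes.append(sum(int(b) << i for i, b in enumerate(np.ravel(bits))))
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

def pyramid_texture_features(gray, levels=3):
    """Concatenate LBP and LPQ-style histograms over a Gaussian pyramid."""
    feats = []
    for layer in pyramid_gaussian(gray, max_layer=levels - 1):
        lvl = (np.clip(layer, 0, 1) * 255).astype(np.uint8)
        feats.append(lbp_histogram(lvl))
        feats.append(lpq_like_histogram(lvl))
    return np.concatenate(feats)    # feed this vector to any classifier (e.g. an SVM)
```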
2022-06-09
Fu, Chen, Rui, Yu, Wen-mao, Liu.  2021.  Internet of Things Attack Group Identification Model Combined with Spectral Clustering. 2021 IEEE 21st International Conference on Communication Technology (ICCT). :778–782.
To address the problem that ordinary intrusion detection models cannot effectively identify increasingly complex, continuous, multi-source and organized network attacks, this paper proposes an Internet of Things attack group identification model to identify planned and organized attack groups. The model takes the common attack source IP, target IP, timestamp and target port as the characteristics of the attack log data to establish an identification benchmark for attack gang behavior. It then applies the spectral clustering algorithm to group attackers with similar attack behaviors and carries out a specific image analysis (profiling) of each attack gang. An experimental evaluation was carried out on real IoT honeypot attack log data, comparing spectral clustering with K-means, DBSCAN and other clustering algorithms. The experimental results show that the silhouette coefficient of spectral clustering was significantly higher than that of the other clustering algorithms. The recognition model based on spectral clustering proposed in this paper therefore performs better and can effectively identify attack groups and mine their attack preferences.
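A minimal sketch of this kind of pipeline is shown below: per-attacker behaviour features are aggregated from the four log fields, clustered with spectral clustering, and compared against K-means and DBSCAN via the silhouette coefficient. The column names, cluster count and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering, KMeans, DBSCAN
from sklearn.metrics import silhouette_score

# logs: one row per attack event with columns src_ip, dst_ip, dst_port, timestamp
logs = pd.read_csv("honeypot_attack_log.csv", parse_dates=["timestamp"])  # illustrative file

# Aggregate per-attacker behaviour features from the four log fields.
feats = logs.groupby("src_ip").agg(
    n_events=("dst_ip", "size"),
    n_targets=("dst_ip", "nunique"),
    n_ports=("dst_port", "nunique"),
    span_h=("timestamp", lambda t: (t.max() - t.min()).total_seconds() / 3600),
)
X = StandardScaler().fit_transform(feats)

models = {
    "spectral": SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                                   n_neighbors=10, assign_labels="kmeans", random_state=0),
    "kmeans": KMeans(n_clusters=5, n_init=10, random_state=0),
    "dbscan": DBSCAN(eps=0.8, min_samples=5),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    if len(set(labels)) > 1:           # silhouette needs at least two clusters
        print(name, round(silhouette_score(X, labels), 3))
```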
2022-05-23
Iglesias, Maria Insa, Jenkins, Mark, Morison, Gordon.  2021.  An Enhanced Photorealistic Immersive System using Augmented Situated Visualization within Virtual Reality. 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :514–515.
This work presents a system that allows image data and extracted features from a real-world location to be captured and modelled in a Virtual Reality (VR) environment, with Augmented Situated Visualizations (ASV) overlaid and registered in the virtual environment. Combining these technologies with techniques from Data Science and Artificial Intelligence (AI), such as image analysis and 3D reconstruction, allows the creation of a setting where remote locations can be modelled and interacted with from anywhere in the world. This Enhanced Photorealistic Immersive (EPI) system is highly adaptable to a wide range of use cases and users, as it can be used to model and interact with any environment that can be captured as image data (such as training for operation in hazardous environments, accessibility solutions for exploring historical and tourism locations, and collaborative learning environments). A use case focused on the structural examination of railway tunnels, along with a pilot study, is presented to demonstrate the usefulness of the EPI system.
2021-05-25
Murguia, Carlos, Tabuada, Paulo.  2020.  Privacy Against Adversarial Classification in Cyber-Physical Systems. 2020 59th IEEE Conference on Decision and Control (CDC). :5483–5488.
For a class of Cyber-Physical Systems (CPSs), we address the problem of performing computations over the cloud without revealing private information about the structure and operation of the system. We model CPSs as a collection of input-output dynamical systems (the system operation modes). Depending on the mode the system is operating in, the output trajectory is generated by one of these systems in response to driving inputs. Output measurements and driving inputs are sent to the cloud for processing purposes. We capture this "processing" through some function (of the input-output trajectory) that we require the cloud to compute accurately, referred to here as the trajectory utility. However, for privacy reasons, we would like to keep the mode private, i.e., we do not want the cloud to correctly identify which mode of the CPS produced a given trajectory. To this end, we distort trajectories before transmission and send the corrupted data to the cloud. We provide mathematical tools (based on output-regulation techniques) to properly design distorting mechanisms so that: 1) the original and distorted trajectories lead to the same utility; and 2) the distorted data leads the cloud to misclassify the mode.
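The following toy illustrates the underlying trade-off, not the paper's output-regulation design: a zero-mean distortion leaves a mean-based utility unchanged while driving a simple variance-threshold mode classifier to the wrong decision. The signals, utility and classifier are all illustrative assumptions.

```python
# Toy illustration only: preserve a chosen utility while flipping a mode classifier.
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Mode 1: smooth, low-variance output (mode 2 would be a high-variance output).
y_mode1 = 1.0 + 0.05 * rng.standard_normal(N)

def utility(y):            # what the cloud must compute exactly
    return y.mean()

def classify(y):           # adversary's mode classifier (illustrative)
    return 1 if y.var() < 0.5 else 2

# Distortion: exactly zero-mean, high-variance -> utility preserved, variance inflated.
d = rng.standard_normal(N)
d -= d.mean()
y_sent = y_mode1 + 2.0 * d

print("utility original :", round(utility(y_mode1), 4))
print("utility distorted:", round(utility(y_sent), 4))               # identical
print("classified mode  :", classify(y_mode1), "->", classify(y_sent))  # 1 -> 2
```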
2021-04-09
Lin, T., Shi, Y., Shu, N., Cheng, D., Hong, X., Song, J., Gwee, B. H..  2020.  Deep Learning-Based Image Analysis Framework for Hardware Assurance of Digital Integrated Circuits. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
We propose an Artificial Intelligence (AI)/Deep Learning (DL)-based image analysis framework for hardware assurance of digital integrated circuits (ICs). Our aim is to examine and verify various hardware information from analyzing the Scanning Electron Microscope (SEM) images of an IC. In our proposed framework, we apply DL-based methods at all essential steps of the analysis. To the best of our knowledge, this is the first such framework that makes heavy use of DL-based methods at all essential analysis steps. Further, to reduce time and effort required in model re-training, we propose and demonstrate various automated or semi-automated training data preparation methods and demonstrate the effectiveness of using synthetic data to train a model. By applying our proposed framework to analyzing a set of SEM images of a large digital IC, we prove its efficacy. Our DL-based methods are fast, accurate, robust against noise, and can automate tasks that were previously performed mainly manually. Overall, we show that DL-based methods can largely increase the level of automation in hardware assurance of digital ICs and improve its accuracy.
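As a hedged sketch of the "train on synthetic data" idea mentioned above, the snippet below generates simple synthetic SEM-like patches (a bright via-like blob on a noisy background versus background only) and trains a tiny CNN on them. The patch generator, noise model and network are illustrative assumptions, not the paper's framework.

```python
import numpy as np
import torch
import torch.nn as nn

def synth_patch(with_via, size=32, rng=None):
    """Noisy background patch, optionally with a bright circular 'via'."""
    rng = rng or np.random.default_rng()
    img = 0.2 + 0.05 * rng.standard_normal((size, size))
    if with_via:
        yy, xx = np.mgrid[:size, :size]
        cy, cx = rng.integers(8, size - 8, 2)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 < rng.integers(3, 7) ** 2] = 0.9
    return img.astype(np.float32)

rng = np.random.default_rng(0)
X = np.stack([synth_patch(i % 2 == 1, rng=rng) for i in range(2000)])[:, None]
y = np.arange(2000) % 2

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
Xb, yb = torch.from_numpy(X), torch.from_numpy(y)

for epoch in range(5):                 # train on synthetic patches only
    opt.zero_grad()
    loss = loss_fn(model(Xb), yb)
    loss.backward()
    opt.step()
# The trained model can then be applied (or fine-tuned) on real SEM patches.
```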
2021-03-29
Al-Janabi, S. I. Ali, Al-Janabi, S. T. Faraj, Al-Khateeb, B..  2020.  Image Classification using Convolution Neural Network Based Hash Encoding and Particle Swarm Optimization. 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI). :1–5.
Image Retrieval (IR) has recently become one of the main problems facing the computing community. To speed up the computation of similarities between images, hashing approaches have become the focus of many researchers. Indeed, in the past few years, Deep Learning (DL) has been considered a backbone for image analysis using Convolutional Neural Networks (CNNs). This paper aims to design and implement a high-performance image classifier that can be used in several applications such as intelligent vehicles, face recognition, marketing, and many others. The work uses experimentation to find the best configuration of a sequential model for classifying images. The best performance was obtained with a two-layer architecture, where the first layer consists of 128 nodes and the second layer of 32 nodes, reaching an accuracy of 0.9012. The proposed classifier was built using a CNN and data extracted from the CIFAR-10 dataset by the Inception model, called the Transfer Values (TRVs). The Particle Swarm Optimization (PSO) algorithm is used to reduce the TRVs; in this respect, the focus of the work is to reduce the TRVs in order to obtain high-performance image classifier models. The PSO algorithm is further enhanced with the crossover technique from genetic algorithms, which reduces the complexity of the models in terms of the number of parameters used and the execution time.
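A hedged sketch of the classifier stage only is shown below: a sequential network with 128- and 32-node hidden layers trained on Inception transfer values (assumed here to be 2048-dimensional pooled features); the PSO/crossover feature-reduction step is not reproduced. The file names are placeholders.

```python
import numpy as np
import tensorflow as tf

num_classes = 10           # CIFAR-10
trv_dim = 2048             # assumed size of the Inception transfer values

# Transfer values extracted beforehand (illustrative file names).
X_train = np.load("cifar10_transfer_values.npy")   # shape (n, trv_dim)
y_train = np.load("cifar10_labels.npy")            # shape (n,)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(trv_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=128, validation_split=0.1)
```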
2021-02-08
Wang, R., Li, L., Hong, W., Yang, N..  2009.  A THz Image Edge Detection Method Based on Wavelet and Neural Network. 2009 Ninth International Conference on Hybrid Intelligent Systems. 3:420–424.

A THz image edge detection approach based on wavelets and a neural network is proposed in this paper. First, the source image is decomposed by the wavelet transform. On the coarsest level of the decomposition, edges in the low-frequency sub-image are detected with a neural network, while edges in the high-frequency sub-images are detected with the wavelet transform method; the two edge images are fused according to a set of fusion rules to obtain the edge image of this level, which is then projected to the next level. The edge image of level L-1 is then obtained according to the same fusion rules, and this process is repeated until level 0 is reached, yielding the final integrated and clear edge image. The experimental results show that our fusion-based approach is superior to the Canny operator method and to the wavelet transform method alone.
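A simplified sketch of this decompose-detect-fuse-project flow is given below using PyWavelets, with a thresholded Sobel detector standing in for the paper's neural-network detector on the low-frequency sub-image; the OR-based fusion rule and thresholds are assumptions.

```python
import numpy as np
import pywt
from skimage.filters import sobel
from skimage.transform import resize

def wavelet_fusion_edges(image, levels=3, wavelet="haar"):
    """Multi-level wavelet edge detection with level-by-level fusion (sketch)."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    cA = coeffs[0]
    cA = (cA - cA.min()) / (np.ptp(cA) + 1e-12)
    edges = sobel(cA) > 0.1                # stand-in for the neural-network detector
    for (cH, cV, cD) in coeffs[1:]:        # from the coarsest to the finest level
        detail_mag = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
        detail_edges = detail_mag > detail_mag.mean() + 2 * detail_mag.std()
        # Project the previous level's edge map up and fuse it with this level's edges.
        edges = resize(edges.astype(float), detail_edges.shape) > 0.5
        edges = np.logical_or(edges, detail_edges)
    return resize(edges.astype(float), image.shape) > 0.5
```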

2020-12-11
Mikołajczyk, A., Grochowski, M..  2019.  Style transfer-based image synthesis as an efficient regularization technique in deep learning. 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR). :42–47.

These days, deep learning is the fastest-growing area in the field of Machine Learning, and Convolutional Neural Networks are currently the main tool used for image analysis and classification purposes. Despite great achievements and promising perspectives, deep neural networks and the accompanying learning algorithms still face some relevant challenges. In this paper, we have focused on the most frequently mentioned problem in the field of machine learning, namely relatively poor generalization ability. Partial remedies for this are regularization techniques, e.g. dropout, batch normalization, weight decay, transfer learning, early stopping and data augmentation. In this paper we focus on data augmentation. We propose to use a method based on neural style transfer, which allows generating new unlabeled images of high perceptual quality that combine the content of a base image with the appearance of another one. In the proposed approach, the newly created images are described with pseudo-labels and then used as a training dataset, while the real, labeled images are divided into validation and test sets. We validated the proposed method on a challenging skin lesion classification case study, examining four representative neural architectures. The obtained results show the strong potential of the proposed approach.
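A minimal sketch of such an augmentation loop is shown below. `stylize` is a hypothetical stand-in for any neural style transfer implementation, and the pseudo-labeling rule (each styled image inherits the label of its content image) is one plausible strategy, not necessarily the paper's exact one.

```python
import random

def build_pseudo_labeled_set(labeled_images, style_images, stylize, n_per_image=5):
    """Create styled variants of each labeled base image; each new image
    inherits the base image's label as a pseudo-label."""
    augmented = []
    for base_img, label in labeled_images:
        for _ in range(n_per_image):
            style = random.choice(style_images)
            # Content of the base image, appearance of the style image.
            augmented.append((stylize(base_img, style), label))
    return augmented

# The augmented pairs are added to the training set; the real labeled images
# are kept aside for validation and testing, as described in the abstract.
```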

2020-12-01
Garbo, A., Quer, S..  2018.  A Fast MPEG’s CDVS Implementation for GPU Featured in Mobile Devices. IEEE Access. 6:52027–52046.
The Moving Picture Experts Group's Compact Descriptors for Visual Search (MPEG's CDVS) intends to standardize technologies in order to enable an interoperable, efficient, and cross-platform solution for internet-scale visual search applications and services. Among the key technologies within CDVS are the format of the visual descriptors, the descriptor extraction process, and the algorithms for indexing and matching. Unfortunately, these steps require precision and computational accuracy. Moreover, they are very time-consuming, with running times on the order of seconds when implemented on the central processing unit (CPU) of modern mobile devices. In this paper, to reduce computation times while maintaining precision and accuracy, we re-design, for many-core embedded graphics processing units (GPUs), all main phases of the local descriptor extraction pipeline of the MPEG CDVS standard. To reach this goal, we introduce new techniques to adapt the standard algorithm to parallel processing. Furthermore, to reduce memory accesses and efficiently distribute the kernel workload, we use new approaches to store and retrieve CDVS information in proper GPU data structures. We present a complete experimental analysis on a large and standard test set. Our experiments show that our GPU-based approach is remarkably faster than the CPU-based reference implementation of the standard while maintaining comparable precision in terms of true and false positive rates.
2020-03-04
Puteaux, Pauline, Puech, William.  2019.  Image Analysis and Processing in the Encrypted Domain. 2019 IEEE International Conference on Image Processing (ICIP). :3020–3022.

In this research project, we are interested in finding solutions to the problem of image analysis and processing in the encrypted domain. For security reasons, more and more digital data are transferred or stored in the encrypted domain. However, during the transmission or archiving of encrypted images, it is often necessary to analyze or process them without knowing the original content or the secret key used during the encryption phase. We propose to work on this problem by associating theoretical aspects with numerous applications. Our main contributions concern: data hiding in encrypted images, correction of noisy encrypted images, recompression of crypto-compressed images, and secret image sharing.

2020-02-26
Shi, Qihang, Vashistha, Nidish, Lu, Hangwei, Shen, Haoting, Tehranipoor, Bahar, Woodard, Damon L, Asadizanjani, Navid.  2019.  Golden Gates: A New Hybrid Approach for Rapid Hardware Trojan Detection Using Testing and Imaging. 2019 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :61–71.

Hardware Trojans are malicious modifications to integrated circuits (ICs) that pose a grave threat to the security of modern military and commercial systems. Existing methods of detecting hardware Trojans are plagued by the inability to detect all Trojans, reliance on a golden chip that might not be available, high time cost, and low accuracy. In this paper, we present Golden Gates, a novel detection method designed to achieve a level of accuracy comparable to full reverse engineering while paying only a fraction of its cost in time. The proposed method inserts golden gate circuits (GGC) to achieve superlative accuracy in classifying all existing gate footprints using rapid scanning electron microscopy (SEM) and backside ultra-thinning. Possible attacks against GGC, as well as malicious modifications on interconnect layers, are discussed and addressed with a secure built-in exhaustive test infrastructure. Evaluation with real SEM images demonstrates the high classification accuracy and attack resistance of the proposed technique.

2019-02-22
Wang, Xiangwen, Peng, Peng, Wang, Chun, Wang, Gang.  2018.  You Are Your Photographs: Detecting Multiple Identities of Vendors in the Darknet Marketplaces. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :431–442.

Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement have started to investigate darknet markets to study cybercriminal networks and predict future incidents. However, vendors in these markets often create multiple accounts (i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series of deep neural networks to model their photography styles, applying transfer learning so that vendors can be accurately fingerprinted with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into coordinated Sybil activities such as price manipulation, buyer scams, and product stocking and reselling.
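A hedged sketch of the account-linking idea (not the paper's trained fingerprinting network) is shown below: embed each product photo with a generic pretrained CNN, average the embeddings per account, and flag account pairs whose vectors are unusually similar. The backbone, threshold and data layout are assumptions; a recent torchvision is assumed for the weights API.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from itertools import combinations

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # keep the 512-d embedding
backbone.eval().to(device)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def vendor_vector(photo_paths):
    """L2-normalized mean embedding of one account's product photos."""
    embs = []
    for p in photo_paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0).to(device)
        embs.append(backbone(x).squeeze(0))
    return torch.nn.functional.normalize(torch.stack(embs).mean(0), dim=0)

def sybil_candidates(vendors, threshold=0.9):
    """vendors: dict of account name -> list of photo paths (illustrative)."""
    vecs = {name: vendor_vector(paths) for name, paths in vendors.items()}
    return [(a, b, float(vecs[a] @ vecs[b]))
            for a, b in combinations(vecs, 2)
            if float(vecs[a] @ vecs[b]) >= threshold]
```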

2017-10-04
Jaume-i-Capó, Antoni, Mena-Barco, Carlos, Moyà-Alcover, Biel.  2016.  Analysis of Blood Cell Morphology in Touch-based Devices Using a CAPTCHA. Proceedings of the XVII International Conference on Human Computer Interaction. :27:1–27:2.
In this paper, we present an experimental system for controlling human access to information systems. The system also allows analyzing the morphology of red blood cells in microscope images of patients with sicklemia.
2015-05-05
Voigt, S., Schoepfer, E., Fourie, C., Mager, A..  2014.  Towards semi-automated satellite mapping for humanitarian situational awareness. Global Humanitarian Technology Conference (GHTC), 2014 IEEE. :412–416.

Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area of interest precluding many useful applications. Today, more and more such satellite systems are available, while visual analysis and interpretation of imagery remain important for deriving relevant features and changes from satellite data. In order to allow efficient, robust and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. In the frame of the European Earth Observation program COPERNICUS and related research activities under the European Union's Seventh Framework Program, substantial scientific developments and mapping services are dedicated to satellite-based humanitarian mapping and monitoring. In this paper, recent results in methodological research and the development of routine services in satellite mapping for humanitarian situational awareness are reviewed and discussed. Ethical aspects of the sensitivity and security of humanitarian mapping are deliberated. Furthermore, methods for monitoring and analysis of camps for refugees and internally displaced persons in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample supervised segmentation and feature extraction are presented and discussed.

2015-05-04
Kaghaz-Garan, S., Umbarkar, A., Doboli, A..  2014.  Joint localization and fingerprinting of sound sources for auditory scene analysis. Robotic and Sensors Environments (ROSE), 2014 IEEE International Symposium on. :49–54.

In the field of scene understanding, researchers have mainly focused on using video and images to extract the different elements of a scene, with a high computational as well as monetary cost associated with such implementations. This paper proposes a low-cost system that uses sound-based techniques to jointly perform localization and fingerprinting of sound sources. A network of embedded nodes is used to sense the sound inputs. Phase-based sound localization and Support Vector Machine classification are used to locate and classify elements of the scene, respectively, and the fusion of all this data presents a complete "picture" of the scene. The proposed concepts are applied to a vehicular-traffic case study. Experiments show that the system has a fingerprinting accuracy of up to 97.5%, a localization error of less than 4 degrees, and a scene prediction accuracy of 100%.
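A hedged sketch of the two building blocks follows: phase-based time-delay estimation between two microphones (here via GCC-PHAT) for localization, and an SVM for fingerprinting. The geometry, sample rate and hand-crafted features are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVC

def gcc_phat_delay(sig, ref, fs, max_tau=None):
    """Time delay (seconds) between two microphone signals via GCC-PHAT."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n=n)        # phase transform weighting
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def direction_of_arrival(sig, ref, fs, mic_distance, c=343.0):
    """Bearing (degrees) of the source relative to the two-microphone axis."""
    tau = gcc_phat_delay(sig, ref, fs, max_tau=mic_distance / c)
    return np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0)))

def spectral_features(frame, fs):
    """Simple per-frame features for fingerprinting (illustrative choice)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    centroid = (freqs * mag).sum() / (mag.sum() + 1e-12)
    zcr = (frame[:-1] * frame[1:] < 0).mean()
    return [centroid, frame.std(), zcr]

# X = [spectral_features(f, fs) for f in labelled_frames]; y = class labels
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
# clf.fit(X, y); clf.predict(...) then classifies each localized source.
```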