Biblio

Filters: Keyword is vision
2023-03-31
Vinod, G., Padmapriya, G..  2022.  An Intelligent Traffic Surveillance for Detecting Real-Time Objects Using Deep Belief Networks over Convolutional Neural Networks with improved Accuracy. 2022 International Conference on Business Analytics for Technology and Security (ICBATS). :1–4.
Aim: Object detection is one of the most active topics today: detecting real-time objects using Deep Belief Networks. Methods & Materials: Real-time object detection is performed using Deep Belief Networks (N=24) compared against Convolutional Neural Networks (N=24), with the dataset split into 70% training and 30% testing. Results: Deep Belief Networks achieved significantly better accuracy (81.2%) than Convolutional Neural Networks (47.7%), with a significance value of p = 0.083. Conclusion: Deep Belief Networks achieved significantly better object detection than Convolutional Neural Networks for identifying real-time objects in traffic surveillance.
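As a rough illustration of the evaluation protocol the abstract describes (a 70/30 train/test split, N=24 accuracy samples per model, and a significance test over the two samples), a minimal sketch follows; the dataset and both classifiers are illustrative stand-ins, not the authors' DBN/CNN pipeline:

```python
# Minimal sketch of the evaluation protocol in the abstract: two models,
# a 70/30 split, N=24 accuracy samples each, and an independent-samples
# t-test. Models and dataset are placeholders, not the paper's pipeline.
import numpy as np
from scipy import stats
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)  # placeholder for traffic imagery

def accuracy_samples(make_model, n_runs=24):
    """Collect n_runs test accuracies, re-splitting 70/30 each run."""
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.30, random_state=seed)
        scores.append(make_model().fit(X_tr, y_tr).score(X_te, y_te))
    return np.array(scores)

a = accuracy_samples(lambda: MLPClassifier(max_iter=500))       # stand-in for the DBN
b = accuracy_samples(lambda: LogisticRegression(max_iter=500))  # stand-in for the CNN
t, p = stats.ttest_ind(a, b)  # significance test over the 24 runs per model
print(f"mean acc A={a.mean():.3f}, B={b.mean():.3f}, p={p:.3f}")
```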
2021-06-24
Pashchenko, Ivan, Scandariato, Riccardo, Sabetta, Antonino, Massacci, Fabio.  2021.  Secure Software Development in the Era of Fluid Multi-party Open Software and Services. 2021 IEEE/ACM 43rd International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). :91–95.
Pushed by market forces, software development has become fast-paced. As a consequence, modern development projects are assembled from third-party components. Security and privacy assurance techniques, once designed for large, controlled updates rolled out over months or years, must now cope with small, continuous changes taking place within a week, often in sub-components controlled by third-party developers one might not even know exist. In this paper, we aim to provide an overview of current software security approaches and evaluate their appropriateness in the face of the changed nature of software development. Software security assurance could benefit from switching from a process-based to an artefact-based approach. Further, security evaluation might need to become more incremental, automated, and decentralized. We believe this can be achieved by supporting mechanisms for lightweight and scalable screenings that are applicable to the entire population of software components, albeit there might be a price to pay.
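As an illustration of the kind of lightweight, artefact-based screening the authors advocate, here is a minimal sketch that checks a project's pinned dependencies against a set of known-vulnerable versions on every change; the manifest format, the advisory entries, and the parse_manifest/screen helpers are all hypothetical, not from the paper:

```python
# Illustrative sketch (not from the paper) of lightweight, artefact-based
# screening: check a project's pinned dependencies against known-vulnerable
# versions on every change, cheaply enough to run per commit.
KNOWN_VULNERABLE = {
    ("left-pad", "1.0.0"),      # hypothetical advisory entries
    ("fastjson", "1.2.24"),
}

def parse_manifest(text):
    """Parse 'name==version' lines, e.g. from a pinned requirements file."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, version = line.partition("==")
            deps.append((name, version))
    return deps

def screen(manifest_text):
    """Return the flagged (name, version) pairs found in the manifest."""
    return [d for d in parse_manifest(manifest_text) if d in KNOWN_VULNERABLE]

print(screen("requests==2.31.0\nfastjson==1.2.24\n"))  # -> [('fastjson', '1.2.24')]
```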
2020-02-18
Han, Chihye, Yoon, Wonjun, Kwon, Gihyun, Kim, Daeshik, Nam, Seungkyu.  2019.  Representation of White- and Black-Box Adversarial Examples in Deep Neural Networks and Humans: A Functional Magnetic Resonance Imaging Study. 2019 International Joint Conference on Neural Networks (IJCNN). :1–8.
The recent success of brain-inspired deep neural networks (DNNs) in solving complex, high-level visual tasks has led to rising expectations for their potential to match the human visual system. However, DNNs exhibit idiosyncrasies that suggest their visual representation and processing might be substantially different from human vision. One limitation of DNNs is that they are vulnerable to adversarial examples: input images to which subtle, carefully designed noise is added to fool a machine classifier. The robustness of the human visual system against adversarial examples is potentially of great importance, as it could uncover a key mechanistic feature that machine vision has yet to incorporate. In this study, we compare the visual representations of white- and black-box adversarial examples in DNNs and humans by leveraging functional magnetic resonance imaging (fMRI). We find a small but significant difference in representation patterns between the two types of adversarial examples (i.e., white- versus black-box) in both humans and DNNs. However, unlike in DNNs, human performance on categorical judgment is not degraded by either type of noise. These results suggest that adversarial examples may be differentially represented in the human visual system, yet are unable to affect the perceptual experience.
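The paper does not specify how its adversarial examples were crafted; as a minimal sketch of a standard white-box attack of the kind the abstract describes (subtle noise crafted with full access to the model's gradients), the following uses the fast gradient sign method (FGSM), with the model, input, and fgsm helper as illustrative assumptions:

```python
# Minimal white-box adversarial-example sketch using the fast gradient
# sign method (FGSM). FGSM is a standard stand-in here; the paper's
# actual attack is not specified in the abstract.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Perturb x by eps in the direction that increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

# Usage with a placeholder differentiable classifier and input batch:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)  # placeholder image
y = torch.tensor([3])         # placeholder label
x_adv = fgsm(model, x, y)
```

In a black-box setting, by contrast, the attacker has no access to x.grad and must rely on, for example, transferring perturbations crafted on a substitute model.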