Biblio

Found 2688 results

Filters: First Letter Of Last Name is P
2022-04-01
Ali, Hisham, Papadopoulos, Pavlos, Ahmad, Jawad, Pitropakis, Nikolaos, Jaroucheh, Zakwan, Buchanan, William J..  2021.  Privacy-preserving and Trusted Threat Intelligence Sharing using Distributed Ledgers. 2021 14th International Conference on Security of Information and Networks (SIN). 1:1—6.
Threat information sharing is considered one of the proactive defensive approaches for enhancing the overall security of trusted partners. Trusted partner organizations can provide access to past and current cybersecurity threats to reduce the risk of a potential cyberattack. The requirements for threat information sharing range from simplistic sharing of documents to threat intelligence sharing. Therefore, the storage and sharing of highly sensitive threat information raises considerable concerns about constructing a secure, trusted threat information exchange infrastructure. Establishing a trusted ecosystem for threat sharing will promote the validity, security, anonymity, scalability, latency efficiency, and traceability of the stored information and protect it from unauthorized disclosure. This paper proposes a system that ensures the security principles mentioned above by utilizing a distributed ledger technology that provides secure decentralized operations through smart contracts and offers a privacy-preserving ecosystem for threat information storage and sharing aligned with the MITRE ATT&CK framework.
2022-07-01
Pan, Conglin, Chen, Si, Wu, Wei, Qian, Jiachuan, Wang, Lijun.  2021.  Research on Space-Time Block Code Technology in MIMO System. 2021 7th International Conference on Computer and Communications (ICCC). :1875—1879.
MIMO technology has been widely used in telecommunication systems, and space-time coding is a key part of MIMO technology. A good coding scheme can exploit spatial diversity to correct errors generated in transmission and increase the normalized transfer rate with low decoding complexity. Building on research into different space-time block codes, this paper proposes a new STBC, the Diagonal Block Orthogonal Space-Time Block Code (DBOAST). We then compare it with other STBCs in terms of bit error rate, transfer rate, decoding complexity, and peak-to-average power ratio; the results demonstrate the superiority of DBOAST.
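The abstract does not reproduce the DBOAST construction. As background on orthogonal STBCs, a textbook illustration of the classical 2x2 Alamouti code is sketched below; the notation is assumed here and is not taken from the paper.

```latex
% Alamouti 2x2 orthogonal STBC: symbols s_1, s_2 are sent over two antennas
% (columns) and two symbol periods (rows). Orthogonality of the columns is
% what enables simple linear maximum-likelihood decoding.
\[
\mathbf{G}_2 =
\begin{pmatrix}
 s_1 & s_2 \\
-s_2^{*} & s_1^{*}
\end{pmatrix},
\qquad
\mathbf{G}_2^{H}\mathbf{G}_2 = \left(|s_1|^2 + |s_2|^2\right)\mathbf{I}_2 .
\]
```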
2022-09-09
Liao, Han-Teng, Pan, Chung-Lien.  2021.  The Role of Resilience and Human Rights in the Green and Digital Transformation of Supply Chain. 2021 IEEE 2nd International Conference on Technology, Engineering, Management for Societal impact using Marketing, Entrepreneurship and Talent (TEMSMET). :1—7.
To make supply chains sustainable and smart, companies can use information and communication technologies to manage procurement, sourcing, conversion, logistics, and customer relationship management activities. Characterized by profit, people, and planet, the supply chain processes of creating value and managing risks are expected to be digitally transformed. Once digitized, datafied, and networked, supply chains can account for substantial progress towards sustainability. Given the lack of clarity on the concepts of resilience and human rights for the supply chain, especially with the recent advancement of social media, big data, artificial intelligence, and cloud computing, the study conducts a scoping review. To identify the size, scope, and themes, it collected 180 articles from the Web of Science bibliographic database. The bibliometric findings reveal the overall conceptual and intellectual structure, and the gaps for further research and development. The concept of resilience can be enriched, for instance, by environmental, social, and governance (ESG) concerns. The enriched notion of resilience can also be expressed in digitized, datafied, and networked forms.
2022-04-12
Shams, Montasir, Pavia, Sophie, Khan, Rituparna, Pyayt, Anna, Gubanov, Michael.  2021.  Towards Unveiling Dark Web Structured Data. 2021 IEEE International Conference on Big Data (Big Data). :5275—5282.
Anecdotal evidence suggests that Web-search engines, together with Knowledge Graphs and Bases such as YAGO [46], DBPedia [13], Freebase [16], and Google Knowledge Graph [52], provide rapid access to most structured information on the Web. However, a closer look reveals a so-called "knowledge gap" [18] that is largely in the dark. For example, a person searching for a relevant job opening has to spend at least 3 hours per week for several months [2] just searching job postings on numerous online job-search engines and employer websites. The reason this seemingly simple task cannot be completed by typing a few keyword queries into a search engine and getting all relevant results in seconds rather than hours is that access to structured data on the Web is still rudimentary. While searching for a job we have many parameters in mind, not just the job title, but usually also location, salary range, remote-work option (given the recent shift to hybrid workplaces), and many others. Ideally, we would like to write a SQL-style query selecting all job postings that satisfy our requirements, but this is currently impossible, because job postings (and all other Web tables) are structured in many different ways and scattered all over the Web. There is neither a Web-scale generalizable algorithm nor a system to locate and normalize all relevant tables in a category of interest from millions of sources. Here we describe, and evaluate on a corpus of hundreds of millions of Web tables [39], a new scalable iterative training-data generation algorithm that produces the high-quality training data required to train deep- and machine-learning models capable of generalizing to Web scale. Models trained on such enriched training data efficiently deal with Web-scale heterogeneity, in contrast to the poor generalization performance of models trained without enrichment [20], [25], [38]. Such models are instrumental in bridging the knowledge gap for structured data on the Web.
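The kind of SQL-style selection the abstract envisions only becomes possible once heterogeneous job-posting tables are located and normalized into a common schema. A minimal sketch of such a query over an already-normalized table is shown below; the column names and values are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

# Hypothetical normalized job-postings table (columns are assumptions).
postings = pd.DataFrame([
    {"title": "Data Engineer", "location": "Tampa, FL", "salary_min": 95000, "remote": True},
    {"title": "Web Developer", "location": "Austin, TX", "salary_min": 80000, "remote": False},
])

# The "SQL-style" selection the abstract describes: keyword in the title,
# a salary floor, and a remote-work option, evaluated in one shot.
matches = postings.query("salary_min >= 90000 and remote")
matches = matches[matches["title"].str.contains("Engineer", case=False)]
print(matches)
```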
2022-09-29
Rohan, Rohani, Funilkul, Suree, Pal, Debajyoti, Chutimaskul, Wichian.  2021.  Understanding of Human Factors in Cybersecurity: A Systematic Literature Review. 2021 International Conference on Computational Performance Evaluation (ComPE). :133–140.
Cybersecurity is paramount for all public and private sectors for protecting their information systems, data, and digital assets from cyber-attacks; however, relying on technology-based protections alone will not achieve this goal. This work examines the role of human factors in cybersecurity by looking at the top-tier conference on Human Factors in Cybersecurity over the past 6 years. A total of 24 articles were selected for the final analysis. Findings show that most of the authors used a quantitative method, with surveys being the most commonly used tool for collecting data, and that less attention has been paid to theoretical research. In addition, three types of users were identified: university-level users, organizational-level users, and unspecified users. Culture is another less-investigated aspect, and the samples were biased towards the western community. Moreover, 17 human factors are identified; human awareness, privacy perception, trust perception, behavior, and capability are the top five among them. Also, new insights and recommendations are presented.
2022-06-10
Poon, Lex, Farshidi, Siamak, Li, Na, Zhao, Zhiming.  2021.  Unsupervised Anomaly Detection in Data Quality Control. 2021 IEEE International Conference on Big Data (Big Data). :2327–2336.
Data is one of the most valuable assets of an organization and has a tremendous impact on its long-term success and decision-making processes. Typically, organizational data error and outlier detection processes are performed manually and reactively, making them time-consuming and prone to human error. Additionally, rich data types, unlabeled data, and increased volume have made such data more complex. Accordingly, an automated anomaly detection approach is required to improve data management and quality control processes. This study introduces an unsupervised anomaly detection approach based on model comparison, consensus learning, and a combination of rules of thumb with iterative hyper-parameter tuning to increase data quality. Furthermore, a domain expert is included as a human in the loop to evaluate and check the data quality and to judge the output of the unsupervised model. An experiment has been conducted to assess the proposed approach in the context of a case study. The experimental results confirm that the proposed approach can improve the quality of organizational data and facilitate anomaly detection processes.
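The abstract does not name the specific models being compared. A minimal sketch of the consensus-of-detectors idea, using two off-the-shelf unsupervised detectors as stand-ins for the paper's models and synthetic data as a placeholder, is shown below.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 3)),    # normal records
               rng.normal(8, 1, size=(10, 3))])     # injected anomalies

# Two independent unsupervised detectors; both return +1 (normal) / -1 (anomaly).
iso = IsolationForest(random_state=0).fit_predict(X)
lof = LocalOutlierFactor(n_neighbors=20).fit_predict(X)

# Simple consensus: flag a record only when both detectors agree it is anomalous;
# in the paper's setting a domain expert would then review these candidates.
consensus = (iso == -1) & (lof == -1)
print("flagged records:", np.where(consensus)[0])
```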
2022-03-25
Kumar, Sandeep A., Chand, Kunal, Paea, Lata I., Thakur, Imanuel, Vatikani, Maria.  2021.  Herding Predators Using Swarm Intelligence. 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE). :1—6.

Swarm intelligence, a nature-inspired concept that includes multiplicity, stochasticity, randomness, and messiness is emergent in most real-life problem-solving. The concept of swarming can be integrated with herding predators in an ecological system. This paper presents the development of stabilizing velocity-based controllers for a Lagrangian swarm of $n \in \mathbb{N}$ individuals, which are supposed to capture a moving target (intruder). The controllers are developed from a Lyapunov function, total potentials, designed via Lyapunov-based control scheme (LbCS) falling under the classical approach of artificial potential fields method. The interplay of the three central pillars of LbCS, which are safety, shortness, and smoothest course for motion planning, results in cost and time effectiveness and efficiency of velocity controllers. Computer simulations illustrate the effectiveness of control laws.
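The paper's Lyapunov function is not reproduced in the abstract. As a hedged illustration of the artificial-potential-fields idea underlying LbCS, a standard attractive/repulsive potential and the induced velocity law are sketched below; the symbols k_a, k_r and d_0 are assumptions, not the paper's notation.

```latex
% Standard artificial-potential-field construction (illustrative only):
% attraction toward the target x_t, repulsion near an obstacle within
% influence radius d_0, and a gradient-descent velocity controller.
\[
V_{\mathrm{att}}(x) = \tfrac{1}{2}\,k_a \lVert x - x_t \rVert^2, \qquad
V_{\mathrm{rep}}(x) =
\begin{cases}
\tfrac{1}{2}\,k_r \left(\dfrac{1}{d(x)} - \dfrac{1}{d_0}\right)^{2}, & d(x) \le d_0,\\[4pt]
0, & d(x) > d_0,
\end{cases}
\]
\[
\dot{x} = -\nabla \big(V_{\mathrm{att}}(x) + V_{\mathrm{rep}}(x)\big),
\]
where $d(x)$ is the distance to the obstacle.
```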

2022-04-01
Dinh, Phuc Trinh, Park, Minho.  2021.  BDF-SDN: A Big Data Framework for DDoS Attack Detection in Large-Scale SDN-Based Cloud. 2021 IEEE Conference on Dependable and Secure Computing (DSC). :1–8.
Software-defined networking (SDN) is nowadays extensively used in a variety of practical settings and provides a new way to manage networks by separating the data plane from the control plane. However, SDN is particularly vulnerable to Distributed Denial of Service (DDoS) attacks because of its centralized control logic. Many studies have proposed tackling DDoS attacks in an SDN design using machine-learning-based schemes; however, these feature-based detection schemes are highly resource-intensive and unable to perform reliably in a large-scale SDN network where a massive amount of traffic data is generated from both the control and data planes. This can deplete computing resources, degrade network performance, or even shut down the network systems owing to resource exhaustion. To address these challenges, this paper proposes a big data framework to overcome traditional data processing limitations and to exploit distributed resources effectively for the most compute-intensive tasks, such as DDoS attack detection using machine learning techniques. We demonstrate the robustness, scalability, and effectiveness of our framework through practical experiments.
2022-04-13
Mishra, Anupama, Gupta, B. B., Peraković, Dragan, Peñalvo, Francisco José García, Hsu, Ching-Hsien.  2021.  Classification Based Machine Learning for Detection of DDoS attack in Cloud Computing. 2021 IEEE International Conference on Consumer Electronics (ICCE). :1—4.
Distributed Denial of Service (DDoS) is a network security attack, and attackers have now intruded into almost every technology, such as cloud computing, IoT, and edge computing, to make themselves stronger. In a DDoS attack, all available resources, such as memory, CPU, or even the entire network, are consumed by the attacker in order to shut down the victim's machine or server. Although plenty of defensive mechanisms have been proposed, they are not efficient because attackers train themselves with newly available automated attack tools. Therefore, we propose a classification-based machine learning approach for detecting DDoS attacks in cloud computing. With the help of three classification machine learning algorithms, k-Nearest Neighbors, Random Forest, and Naive Bayes, the mechanism can detect a DDoS attack with an accuracy of 99.76%.
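A minimal sketch of the classification setup the abstract describes is shown below, using the three named classifiers on synthetic placeholder features; the real study would use labelled cloud-traffic data, and the feature set here is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Placeholder flow features (e.g., packet rate, byte rate, duration) and labels
# (0 = benign, 1 = DDoS); values are synthetic for illustration only.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=42)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```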
2022-05-05
Singh, Praneet, P, Jishnu Jaykumar, Pankaj, Akhil, Mitra, Reshmi.  2021.  Edge-Detect: Edge-Centric Network Intrusion Detection using Deep Neural Network. 2021 IEEE 18th Annual Consumer Communications Networking Conference (CCNC). :1—6.
Edge nodes are crucial for detecting the multitude of cyber attacks on Internet-of-Things endpoints and are set to become part of a multi-billion-dollar industry. The resource constraints in this novel network infrastructure tier restrict the deployment of existing Network Intrusion Detection Systems with Deep Learning models (DLM). We address this issue by developing a novel light, fast and accurate `Edge-Detect' model, which detects Distributed Denial of Service attacks on edge nodes using DLM techniques. Our model can work within resource restrictions, i.e. low power, memory and processing capabilities, to produce accurate results at a meaningful pace. It is built by creating layers of Long Short-Term Memory or Gated Recurrent Unit based cells, which are known for their excellent representation of sequential data. We designed a practical data science pipeline with Recurrent Neural Networks to learn from network packet behavior and identify whether it is normal or attack-oriented. The model evaluation is from deployment on an actual edge node represented by a Raspberry Pi using a current cybersecurity dataset (UNSW2015). Our results demonstrate that, in comparison to conventional DLM techniques, our model maintains a high testing accuracy of 99% even with lower resource utilization in terms of CPU and memory. In addition, it is nearly 3 times smaller in size than the state-of-the-art model and yet requires a much lower testing time.
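A minimal sketch of a stacked recurrent classifier in the spirit of Edge-Detect is given below; the layer sizes, window length, and feature count are illustrative assumptions, not the paper's published configuration.

```python
import tensorflow as tf

# Sequences of packet features are classified as normal vs. DDoS.
window, n_features = 10, 8   # packets per sequence, features per packet (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32, return_sequences=True),   # GRU cells could be used instead
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # binary output: normal vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```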
2022-02-07
Priyadarshan, Pradosh, Sarangi, Prateek, Rath, Adyasha, Panda, Ganapati.  2021.  Machine Learning Based Improved Malware Detection Schemes. 2021 11th International Conference on Cloud Computing, Data Science Engineering (Confluence). :925–931.
In recent years, cyber security has become a challenging task to protect networks and computing systems from various types of digital attacks. Therefore, to preserve these systems, various innovative methods have been reported and implemented in practice. However, more research work still needs to be carried out to achieve malware-free computing systems. In this paper, an attempt has been made to develop simple but reliable ML-based malware detection systems which can be implemented in practice. Keeping this in view, the present paper proposes and compares the performance of three ML-based malware detection systems applicable to computer systems. The proposed methods include k-NN, RF, and LR for detection, and the extracted features comprise Byte and ASM features. The performance obtained from the simulation study of the proposed schemes has been evaluated in terms of ROC, log-loss plot, accuracy, precision, recall, specificity, sensitivity, and F1-score. The analysis of the various results clearly demonstrates that the RF-based malware detection scheme outperforms the models based on k-NN and LR. The detection efficiency of the proposed ML models is the same as or comparable to that of deep learning-based methods.
2022-07-14
Pagán, Alexander, Elleithy, Khaled.  2021.  A Multi-Layered Defense Approach to Safeguard Against Ransomware. 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). :0942–0947.
There has been a significant rise in ransomware attacks over the last few years. Cyber attackers have made use of tried and true ransomware viruses to target the government, health care, and educational institutions. Ransomware variants can be purchased on the dark web by amateurs giving them the same attack tools used by professional cyber attackers without experience or skill. Traditional antivirus and antimalware products have improved, but they alone fall short when it comes to catching and stopping ransomware attacks. Employee training has become one of the most important aspects of being prepared for attempted cyberattacks. However, training alone only goes so far; human error is still the main entry point for malware and ransomware infections. In this paper, we propose a multi-layered defense approach to safeguard against ransomware. We have come to the startling realization that it is not a matter of “if” your organization will be hit with ransomware, but “when” your organization will be hit with ransomware. If an organization is not adequately prepared for an attack or how to respond to an attack, the effects can be costly and devastating. Our approach proposes having innovative antimalware software on the local machines, properly configured firewalls, active DNS/Web filtering, email security, backups, and staff training. With the implementation of this layered defense, the attempt can be caught and stopped at multiple points in the event of an attempted ransomware attack. If the attack were successful, the layered defense provides the option for recovery of affected data without paying a ransom.
2022-09-30
Park, Wonhyung, Ahn, GwangHyun.  2021.  A Study on the Next Generation Security Control Model for Cyber Threat Detection in the Internet of Things (IoT) Environment. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :213–217.
Recently, information leakage incidents have been continuously occurring due to cyberattacks, and internal information leakage has also been occurring. In this situation, many hacking incidents and DDoS attacks related to IoT are reported, and the cyber threat detection field is expanding. Therefore, in this study, the trends related to the commercialization and generalization of IoT technology and the degree of standardization of IoT have been analyzed. Based on the reality of IoT analyzed through this process, research and analysis on what is required in IoT security control was conducted, and an IoT security control strategy is presented. In this strategy, the IoT environment was divided into IoT device, IoT network/communication, and IoT service/platform, in line with the basic strategic framework of 'pre-response, accident response, post-response', and a strategic direction of security control suitable for each was established.
2022-08-04
Pirker, Dominic, Fischer, Thomas, Witschnig, Harald, Steger, Christian.  2021.  velink - A Blockchain-based Shared Mobility Platform for Private and Commercial Vehicles utilizing ERC-721 Tokens. 2021 IEEE 5th International Conference on Cryptography, Security and Privacy (CSP). :62—67.
Transportation of people and goods is important and crucial in the context of smart cities. The trend in people's mobility is moving from privately owned vehicles towards shared mobility. This trend is even stronger in urban areas, where space for parking is limited and mobility is supported by the public transport system, which lowers the need for private vehicles. Several challenges and barriers of currently available solutions hinder massive growth of this mobility option, such as the trust problem, data monopolism, or intermediary costs. Decentralizing mobility management is a promising approach to solving the current problems of the mobility market, allowing a move towards a more usable internet of mobility and smart transportation. Leveraging blockchain technology makes it possible to cut intermediary costs by utilizing smart contracts. Important in this ecosystem is the proof of identity of participants in the blockchain network. To prove possession of the claimed identity, the private key corresponding to the wallet address is utilized and is therefore essential to protect. In this paper, a blockchain-based shared mobility platform is proposed and a proof-of-concept is shown. First, current problems and state-of-the-art systems are analyzed. Then, a decentralized concept is built based on ERC-721 tokens, implemented in a smart contract, and augmented with a Hardware Security Module (HSM) to protect the confidential key material. Finally, the system is evaluated and compared against state-of-the-art solutions.
2022-04-25
Nguyen, Huy Hoang, Ta, Thi Nhung, Nguyen, Ngoc Cuong, Bui, Van Truong, Pham, Hung Manh, Nguyen, Duc Minh.  2021.  YOLO Based Real-Time Human Detection for Smart Video Surveillance at the Edge. 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE). :439–444.
Recently, smart video surveillance at the edge has become a trend in developing security applications, since edge computing enables more image processing tasks to be implemented on the decentralised network nodes of the surveillance system. As a result, many security applications such as behaviour recognition and prediction, employee safety, perimeter intrusion detection and vandalism deterrence can minimise their latency or even process in real-time when the camera network system is extended to a larger degree. Technically, human detection is a key step in the implementation of these applications. With the advantage of high detection rates, deep learning methods have been widely employed on edge devices in order to detect human objects. However, due to their high computation costs, it is challenging to apply these methods on resource-limited edge devices for real-time applications. Inspired by You Only Look Once (YOLO), residual learning and Spatial Pyramid Pooling (SPP), a novel form of real-time human detection is presented in this paper. Our approach focuses on designing a network structure so that the developed model can achieve a good trade-off between accuracy and processing time. Experimental results show that our trained model can process 2 FPS on a Raspberry Pi 3B and detect humans with accuracies of 95.05% and 96.81% when tested respectively on the INRIA and PENN FUDAN datasets. On the human COCO test dataset, our trained model outperforms the Tiny-YOLO versions. Additionally, compared to the SSD-based L-CNN method, our algorithm achieves better accuracy.
2022-01-10
Paul, Avishek, Islam, Md Rabiul.  2021.  An Artificial Neural Network Based Anomaly Detection Method in CAN Bus Messages in Vehicles. 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI). :1–5.

Controller Area Network (CAN) is the bus standard that works as a central system inside vehicles for communicating in-vehicle messages. Despite having many advantages, attackers may hack into a car system through the CAN bus, take control of it and cause serious damage, because the CAN bus lacks security services such as authentication and encryption. Therefore, an anomaly detection system must be integrated with the CAN bus in vehicles. In this paper, we propose an Artificial Neural Network based anomaly detection method to identify illicit messages on the CAN bus. We trained our model with two types of attacks so that it can efficiently identify them. When tested, the proposed algorithm showed high performance in detecting Denial of Service attacks (with 100% accuracy) and fuzzy attacks (with 99.98% accuracy).
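A minimal sketch of a neural classifier over CAN frames is shown below; the frame encoding (arbitration ID plus 8 data bytes), the synthetic DoS-style flooding pattern, and the network size are assumptions for illustration, not the paper's setup or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for CAN frames: arbitration ID plus 8 data bytes per message,
# labelled 0 = normal, 1 = attack. Real experiments would use logged CAN traffic.
rng = np.random.default_rng(7)
normal = rng.integers(0, 256, size=(1000, 9))
attack = np.hstack([np.zeros((200, 1)), rng.integers(0, 256, size=(200, 8))])  # flooded ID 0x000
X = np.vstack([normal, attack]).astype(float) / 255.0
y = np.array([0] * 1000 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=7)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```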

2022-05-05
Han, Weiheng, Cai, Weiwei, Zhang, Guangjia, Yu, Weiguo, Pan, Junjun, Xiang, Longyun, Ning, Tao.  2021.  Cyclic Verification Method of Security Control System Strategy Table Based on Constraint Conditions and Whole Process Dynamic Simulation. 2021 IEEE/IAS Industrial and Commercial Power System Asia (I CPS Asia). :698—703.

The correctness of the security control system strategy is very important for ensuring the stability of the power system. To address the problem that current security control strategy verification methods cannot keep up with increasingly complex large power grids, this paper proposes a cyclic verification method for security control system strategy tables based on constraint conditions and whole-process dynamic simulation. First, the method improves on the traditional security control strategy model so that the strategy model attains a certain generalization ability. On the basis of this model, cyclic dynamic verification of the strategy table is realized based on the constraint conditions and whole-process dynamic simulation, which not only ensures high accuracy of strategy verification for the security control strategy of a complex large power grid, but also ensures that the power system is stable and controllable. Finally, a strategy table verification experiment is carried out on a regional power system. The experimental results show that the average processing time of the proposed method is 10.32 s, and that it can effectively guarantee the controllability and stability of the power grid.

2022-03-23
Benito-Picazo, Jesús, Domínguez, Enrique, Palomo, Esteban J., Ramos-Jiménez, Gonzalo, López-Rubio, Ezequiel.  2021.  Deep learning-based anomalous object detection system for panoramic cameras managed by a Jetson TX2 board. 2021 International Joint Conference on Neural Networks (IJCNN). :1–7.
Social conflicts appearing in the media are increasing public awareness about security issues, resulting in a higher demand for more exhaustive environment monitoring methods. Automatic video surveillance systems are a powerful assistance to public and private security agents. Since the arrival of deep learning, object detection and classification systems have experienced a large improvement in both accuracy and versatility. However, deep learning-based object detection and classification systems often require expensive GPU-based hardware to work properly. This paper presents a novel deep learning-based foreground anomalous object detection system for video streams supplied by panoramic cameras, specially designed to build power-efficient video surveillance systems. The system optimises the process of searching for anomalous objects through a new potential detection generator managed by three different multivariate homoscedastic distributions. Experimental results obtained after its deployment on a Jetson TX2 board attest to the good performance of the system, making it a viable approach for power-saving video surveillance systems.
2022-05-05
Gaikwad, Bipin, Prakash, PVBSS, Karmakar, Abhijit.  2021.  Edge-based real-time face logging system for security applications. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). :1—6.
In this work, we have proposed a state-of-the-art face logging system that detects and logs high quality cropped face images of the people in real-time for security applications. Multiple strategies based on resolution, velocity and symmetry of faces have been applied to obtain best quality face images. The proposed system handles the issue of motion blur in the face images by determining the velocities of the detections. The output of the system is the face database, where four faces for each detected person are stored along with the time stamp and ID number tagged to it. The facial features are extracted by our system, which are used to search the person-of-interest instantly. The proposed system has been implemented in a docker container environment on two edge devices: the powerful NVIDIA Jetson TX2 and the cheaper NVIDIA Jetson Nano. The light and fast face detector (LFFD) used for detection, and ResNet50 used for facial feature extraction are optimized using TensorRT over these edge devices. In our experiments, the proposed system achieves the True Acceptance Rate (TAR) of 0.94 at False Acceptance Rate (FAR) of 0.01 while detecting the faces at 20–30 FPS on NVIDIA Jetson TX2 and about 8–10 FPS on NVIDIA Jetson Nano device. The advantage of our system is that it is easily deployable at multiple locations and also scalable based on application requirement. Thus it provides a realistic solution to face logging application as the query or suspect can be searched instantly, which may not only help in investigation of incidents but also in prevention of untoward incidents.
2022-02-09
Ranade, Priyanka, Piplai, Aritran, Mittal, Sudip, Joshi, Anupam, Finin, Tim.  2021.  Generating Fake Cyber Threat Intelligence Using Transformer-Based Models. 2021 International Joint Conference on Neural Networks (IJCNN). :1–9.
Cyber-defense systems are being developed to automatically ingest Cyber Threat Intelligence (CTI) that contains semi-structured data and/or text to populate knowledge graphs. A potential risk is that fake CTI can be generated and spread through Open-Source Intelligence (OSINT) communities or on the Web to effect a data poisoning attack on these systems. Adversaries can use fake CTI examples as training input to subvert cyber defense systems, forcing their models to learn incorrect inputs to serve the attackers' malicious needs. In this paper, we show how to automatically generate fake CTI text descriptions using transformers. Given an initial prompt sentence, a public language model like GPT-2 with fine-tuning can generate plausible CTI text that can mislead cyber-defense systems. We use the generated fake CTI text to perform a data poisoning attack on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The attack introduced adverse impacts such as returning incorrect reasoning outputs, representation poisoning, and corruption of other dependent AI-based cyber defense systems. We evaluate with traditional approaches and conduct a human evaluation study with cyber-security professionals and threat hunters. Based on the study, professional threat hunters were equally likely to consider our fake generated CTI and authentic CTI as true.
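A minimal sketch of prompt-conditioned generation with an off-the-shelf GPT-2 model from the transformers library is shown below. The paper fine-tunes the model on a CTI corpus first, which is omitted here, and the prompt is an invented example rather than one of the paper's prompts.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the stock (not fine-tuned) GPT-2 model and tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Illustrative CTI-style prompt; sampled continuation mimics fake CTI text.
prompt = "APT29 exploited a vulnerability in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=60, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```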
2022-03-09
Park, Byung H., Chattopadhyay, Somrita, Burgin, John.  2021.  Haze Mitigation in High-Resolution Satellite Imagery Using Enhanced Style-Transfer Neural Network and Normalization Across Multiple GPUs. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :2827—2830.
Despite recent advances in deep learning approaches, haze mitigation in large satellite images is still a challenging problem. Due to the amorphous nature of haze, object detection or image segmentation approaches are not applicable. It is also practically infeasible to obtain ground truths for training. The bounded memory capacity of GPUs is another constraint that limits the size of the image to be processed. In this paper, we propose a style-transfer based neural network approach to mitigate haze in large overhead imagery. The network is trained without paired ground truths; further, perception loss is added to restore vivid colors, enhance contrast and minimize artifacts. The paper also illustrates our use of multiple GPUs in a collective way to produce a single coherent clear image where each GPU dehazes a different portion of a large hazy image.
2022-08-12
Gepperth, Alexander, Pfülb, Benedikt.  2021.  Image Modeling with Deep Convolutional Gaussian Mixture Models. 2021 International Joint Conference on Neural Networks (IJCNN). :1–9.
In this conceptual work, we present Deep Convolutional Gaussian Mixture Models (DCGMMs): a new formulation of deep hierarchical Gaussian Mixture Models (GMMs) that is particularly suitable for describing and generating images. Vanilla (i.e., flat) GMMs require a very large number of components to describe images well, leading to long training times and memory issues. DCGMMs avoid this through a stacked architecture of multiple GMM layers, linked by convolution and pooling operations. This makes it possible to exploit the compositionality of images in a similar way as deep CNNs do. DCGMMs can be trained end-to-end by Stochastic Gradient Descent. This sets them apart from vanilla GMMs, which are trained by Expectation-Maximization and require a prior k-means initialization that is infeasible in a layered structure. For generating sharp images with DCGMMs, we introduce a new gradient-based technique for sampling through non-invertible operations like convolution and pooling. Based on the MNIST and FashionMNIST datasets, we validate the DCGMM model by demonstrating its superiority over flat GMMs for clustering, sampling and outlier detection.
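For context, a flat ("vanilla") GMM of the kind DCGMMs are contrasted with is sketched below: trained by Expectation-Maximization directly on raw pixel vectors, with no convolutional structure. The small scikit-learn digits dataset stands in for MNIST, and the component count is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.mixture import GaussianMixture

# Flat GMM baseline on 8x8 digit images treated as 64-dimensional vectors.
X, _ = load_digits(return_X_y=True)
gmm = GaussianMixture(n_components=30, covariance_type="diag", random_state=0)
gmm.fit(X)                                   # EM training

samples, _ = gmm.sample(5)                   # generate new image-like vectors
log_lik = gmm.score_samples(X)               # low values indicate outliers
print("sampled shape:", samples.shape, "median log-likelihood:", np.median(log_lik))
```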
2022-05-06
Palisetti, Sanjana, Chandavarkar, B. R., Gadagkar, Akhilraj V..  2021.  Intrusion Detection of Sinkhole Attack in Underwater Acoustic Sensor Networks. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). :1—7.
Underwater networks have the potential to enable previously unexplored applications as well as improve our ability to observe and forecast the ocean. Underwater acoustic sensor networks (UASNs) are often deployed in unprecedented and hostile waters and face many security threats. Applications based on UASNs, such as coastal defense, pollution monitoring, and assisted navigation, to name a few, require secure communication. A new set of communication protocols and cooperative coordination algorithms has been proposed to enable collaborative monitoring tasks. However, such protocols overlook security as a key performance indicator. Spoofing, altering, or replaying routing information can affect the entire network, making UASNs vulnerable to routing attacks such as selective forwarding, sinkhole attacks, Sybil attacks, acknowledgement spoofing and HELLO flood attacks. The lack of security against such threats is startling given that security is indeed an important requirement in many emerging civilian and military applications. In this work, we examine the sinkhole attack prevalent in UASNs and discuss mitigation approaches that can feasibly be implemented in UnetStack3.
2022-01-31
Zulfa, Mulki Indana, Hartanto, Rudy, Permanasari, Adhistya Erna.  2021.  Performance Comparison of Swarm Intelligence Algorithms for Web Caching Strategy. 2021 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT). :45—51.
Web caching is one strategy that can be used to speed up response times by storing frequently accessed data on the cache server. Given the cache server's limited capacity, it is necessary to determine the priority of cached data that can enter the cache server. This study simulated cached-data prioritization based on an objective function, as a characteristic of problem-solving using an optimization approach. The objective function of web caching is formulated based on the variables data size, access count, and frequency-time access. We then use the knapsack problem method to find the optimal solution. The simulations run three swarm intelligence algorithms, Ant Colony Optimization (ACO), Genetic Algorithm (GA), and Binary Particle Swarm Optimization (BPSO), divided into several scenarios. The simulation results show that the GA algorithm is relatively stable and converges quickly. The ACO algorithm has the advantage of a non-random initial solution, since it follows the pheromone trail. The BPSO algorithm is the fastest, but the resulting solution quality is not as good as that of ACO and GA.
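A minimal sketch of the knapsack view of cache admission is shown below: objects have a size (the knapsack weight) and a value combining access count and recency. The value weighting and the exact-DP solver are assumptions for illustration; the paper instead optimizes its own objective with ACO, GA, and BPSO.

```python
def knapsack(sizes, values, capacity):
    """Exact 0/1 knapsack by dynamic programming: which objects to cache."""
    n = len(sizes)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if sizes[i - 1] <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - sizes[i - 1]] + values[i - 1])
    # Recover the chosen set of object indices.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return best[n][capacity], sorted(chosen)

# Illustrative cached objects: size in KB, plus an assumed value combining
# access count and recency (the paper's exact weighting is not reproduced here).
sizes = [120, 300, 80, 200, 150]
access_count = [40, 10, 55, 25, 30]
recency = [0.9, 0.2, 0.8, 0.5, 0.7]
values = [int(a * r) for a, r in zip(access_count, recency)]

print(knapsack(sizes, values, capacity=500))
```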
2021-12-22
Panda, Akash Kumar, Kosko, Bart.  2021.  Bayesian Pruned Random Rule Foams for XAI. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
A random rule foam grows and combines several independent fuzzy rule-based systems by randomly sampling input-output data from a trained deep neural classifier. The random rule foam defines an interpretable proxy system for the sampled black-box classifier. The random foam gives the complete Bayesian posterior probabilities over the foam subsystems that contribute to the proxy system's output for a given pattern input. It also gives the Bayesian posterior over the if-then fuzzy rules in each of these constituent foams. The random foam also computes a conditional variance that describes the uncertainty in its predicted output given the random foam's learned rule structure. The mixture structure leads to bootstrap confidence intervals around the output. Using the Bayesian posterior probabilities to prune or discard low-probability sub-foams improves the system's classification accuracy. Simulations used the MNIST image data set of 60,000 gray-scale images of ten hand-written digits. Dropping the lowest-probability foams per input pattern brought the pruned random foam's classification accuracy nearly to that of the neural classifier. Posterior pruning outperformed simple accuracy pruning of a random foam and outperformed a random forest trained on the same neural classifier.