Bibliography
Cybersecurity plays a critical role in protecting sensitive information and the structural integrity of networked systems. As networked systems continue to grow in number and complexity, so do the threat of malicious activity and the need for advanced cybersecurity solutions. Furthermore, the limited quantity and quality of available data on malicious content, together with the fact that malicious activity continuously evolves, make automated protection systems for this type of environment particularly challenging. Not only is data quality a concern, but the volume of data can be quite small for some classes. This creates a class imbalance in the data used to train a classifier, yet many classifiers are not well equipped to deal with class imbalance. One such example is detecting malicious HTML files from static features. Unfortunately, collecting malicious HTML files is extremely difficult, and the data can be quite noisy due to mislabeled HTML files. This paper evaluates a specific application that is afflicted by these modern cybersecurity challenges: detection of malicious HTML files. Previous work presented a general framework for malicious HTML file classification, which we modify in this work to use a $\chi^2$ feature selection technique and the synthetic minority oversampling technique (SMOTE). We experiment with different classifiers (i.e., AdaBoost, GentleBoost, RobustBoost, RUSBoost, and Random Forest) and a pure detection model (i.e., Isolation Forest). We benchmark the different classifiers using SMOTE on a real dataset that contains a limited number of malicious files (40) relative to the number of normal files (7,263). The modified framework performed better than the previous framework. However, we also found evidence that algorithms which train on both the normal and malicious samples are likely overtraining to the malicious distribution. We demonstrate this likely overtraining by determining that a subset of the malicious files, while suspicious, did not come from a malicious source.
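As a rough illustration of the modified pipeline described above, the following Python sketch combines $\chi^2$ feature selection with SMOTE oversampling before fitting a Random Forest. The feature matrix, labels, and parameter values are placeholders standing in for the paper's static HTML features, not its actual configuration.

```python
# Sketch: chi-square feature selection + SMOTE + Random Forest on an
# imbalanced dataset (placeholder data, not the paper's corpus).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Placeholder imbalanced data (~0.5% positives); chi2 requires
# non-negative features, hence the clipping.
X, y = make_classification(n_samples=7303, weights=[0.994], random_state=0)
X = np.clip(X, 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Keep the k features most associated with the label (chi-square test).
selector = SelectKBest(chi2, k=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# 2) Oversample the minority (malicious) class with SMOTE, training data only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_sel, y_tr)

# 3) Train one of the benchmarked classifiers and report per-class metrics.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_sel)))
```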
The security level is very important in Bluetooth, because networks or devices using this form of communication are susceptible to many attacks in which transmitted data is captured through eavesdropping. Cryptosystem designers need to know the complexity of the Bluetooth E0 design, and what advantage is gained by any development applied to any known Bluetooth E0 encryption method. Choosing the most important criteria to use in the evaluation method is therefore an important aspect. This paper introduces a proposed fuzzy logic technique to evaluate the complexity of the Bluetooth E0 encryption system, choosing two parameters, entropy and correlation rate, as inputs to the proposed fuzzy-logic-based evaluator, which can be implemented with MATLAB.
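The abstract names only the two inputs (entropy and correlation rate) and a MATLAB-based evaluator. As a hedged illustration, the sketch below builds a comparable two-input fuzzy evaluator in Python with scikit-fuzzy; the membership functions and rules are chosen purely for demonstration and are not the paper's.

```python
# Sketch of a two-input fuzzy evaluator (entropy, correlation -> complexity).
# Membership functions and rules are illustrative assumptions, not the paper's.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

entropy = ctrl.Antecedent(np.arange(0, 1.01, 0.01), 'entropy')
correlation = ctrl.Antecedent(np.arange(0, 1.01, 0.01), 'correlation')
complexity = ctrl.Consequent(np.arange(0, 1.01, 0.01), 'complexity')

# Three triangular membership functions per input ('poor', 'average', 'good').
entropy.automf(3)
correlation.automf(3)
complexity['low'] = fuzz.trimf(complexity.universe, [0.0, 0.0, 0.5])
complexity['medium'] = fuzz.trimf(complexity.universe, [0.0, 0.5, 1.0])
complexity['high'] = fuzz.trimf(complexity.universe, [0.5, 1.0, 1.0])

rules = [
    # High keystream entropy and low correlation suggest a harder cipher.
    ctrl.Rule(entropy['good'] & correlation['poor'], complexity['high']),
    ctrl.Rule(entropy['average'] | correlation['average'], complexity['medium']),
    ctrl.Rule(entropy['poor'] | correlation['good'], complexity['low']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['entropy'] = 0.95      # example measurement
sim.input['correlation'] = 0.05  # example measurement
sim.compute()
print(f"estimated complexity score: {sim.output['complexity']:.2f}")
```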
In a scenario where user files are stored in a network shared volume, a single computer infected by ransomware could encrypt the whole set of shared files, with a large impact on user productivity. On the other hand, medium and large companies maintain hardware or software probes that monitor the traffic in critical network links in order to evaluate service performance, detect security breaches, account for network or service usage, etc. In this paper we suggest using the monitoring capabilities of one of these tools to keep a trace of the traffic between the users and the file server. Once the ransomware is detected, the lost files can be recovered from the traffic trace, including any user modifications made after the last snapshot of the periodic backups. The paper explains the problems faced by the monitoring tool, which is neither the client nor the server of the file sharing operations. It also describes the data structures needed to process the actions of users who could be working simultaneously on the same file. A proof-of-concept software implementation was capable of successfully recovering the files encrypted by 18 different ransomware families.
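To give a flavor of the data structures involved, the following Python sketch reconstructs a file from a sequence of write operations (user, path, offset, data) observed in a traffic trace, keeping one reconstruction buffer per (user, file) pair so that concurrent work on the same file does not get mixed up. It is only an illustration of the bookkeeping, not the paper's implementation, and it assumes the write operations have already been parsed out of the file-sharing protocol packets.

```python
# Sketch: rebuild file contents from (user, path, offset, data) write records
# recovered from a traffic trace. Purely illustrative bookkeeping.
from collections import defaultdict


class FileReassembler:
    """Keeps one sparse buffer per (user, path) so concurrent writers don't mix."""

    def __init__(self):
        self._buffers = defaultdict(bytearray)  # (user, path) -> file bytes

    def on_write(self, user, path, offset, data):
        buf = self._buffers[(user, path)]
        if len(buf) < offset + len(data):       # grow buffer for sparse writes
            buf.extend(b"\x00" * (offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data   # later writes overwrite earlier ones

    def recover(self, user, path):
        return bytes(self._buffers[(user, path)])


# Example: two users writing to the same shared file are kept separate.
r = FileReassembler()
r.on_write("alice", "\\\\share\\report.docx", 0, b"hello ")
r.on_write("bob", "\\\\share\\report.docx", 0, b"HELLO ")
r.on_write("alice", "\\\\share\\report.docx", 6, b"world")
print(r.recover("alice", "\\\\share\\report.docx"))  # b'hello world'
```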
In order to evaluate network security risks and implement effective defenses in industrial control systems, a risk assessment method for industrial control systems based on attack graphs is proposed. The concept of network security elements is used to translate network attacks into network state migration problems and to build an industrial control network attack graph model. In view of the current reliance on subjective evaluation based on expert experience, an atomic attack probability assignment method and the CVSS evaluation system are introduced to evaluate the security status of the industrial control system. Finally, taking the centralized control system of a thermal power plant as the experimental background, a case analysis is performed. The experimental results show that the method can comprehensively analyze the potential security hazards in the industrial control system and provide a basis for security management personnel to take effective defense measures.
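As a hedged sketch of how CVSS-derived atomic attack probabilities can be combined over an attack graph, the Python example below assigns each atomic attack a probability from an assumed CVSS exploitability sub-score and multiplies probabilities along each attack path to rank them. The graph, the scores, and the score-to-probability mapping are illustrative assumptions, not the paper's exact method.

```python
# Sketch: rank attack paths in a small attack graph using atomic attack
# probabilities derived from (assumed) CVSS v3 exploitability sub-scores.

# Edges: source state -> {target state: exploitability sub-score (0..3.9)}.
attack_graph = {
    "internet":     {"engineer_ws": 2.8, "historian": 2.2},
    "engineer_ws":  {"scada_server": 1.6},
    "historian":    {"scada_server": 2.8},
    "scada_server": {"plc": 1.2},
}

def atomic_probability(exploitability, max_score=3.9):
    """Map a CVSS exploitability sub-score to an atomic attack probability."""
    return exploitability / max_score

def attack_paths(graph, node, target, prob=1.0, path=None):
    """Depth-first enumeration of attack paths with cumulative probability."""
    path = (path or []) + [node]
    if node == target:
        yield prob, path
        return
    for nxt, score in graph.get(node, {}).items():
        if nxt not in path:  # avoid cycles
            yield from attack_paths(graph, nxt, target,
                                    prob * atomic_probability(score), path)

# Rank paths from the attacker's entry point to the PLC by likelihood.
for p, path in sorted(attack_paths(attack_graph, "internet", "plc"), reverse=True):
    print(f"{p:.3f}  " + " -> ".join(path))
```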
Network security and data confidentiality of transmitted information are among the non-functional requirements of industrial wireless sensor networks (IWSNs), in addition to latency, reliability, and energy efficiency requirements. Physical layer security techniques are promising solutions to assist cryptographic methods in the presence of an eavesdropper in IWSN setups. In this paper, we propose a physical layer security scheme based on both the insertion of a random error vector into forward error correction (FEC) codewords and transmission over decentralized relay nodes. Reed-Solomon and Golay codes are selected as FEC coding schemes, and the security performance of the proposed model is evaluated with the aid of the decoding error probability of an eavesdropper. The results show that the security level depends strongly on the location of the eavesdropper and that secure communication can be achieved when some of the channels between the eavesdropper and the relay nodes are significantly noisier.
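The security metric here, the eavesdropper's decoding error probability, can be illustrated with a simple bounded-distance decoding model: a Reed-Solomon (n, k) code corrects up to t = (n - k)/2 symbol errors, so if e symbols are intentionally corrupted before transmission and the eavesdropper's channel corrupts further symbols independently with probability p, decoding fails whenever the total exceeds t. The Python sketch below evaluates this probability under that simplified model; the parameters and the independence assumption are illustrative, not the paper's exact analysis.

```python
# Sketch: probability that an eavesdropper fails to decode an RS(n, k)
# codeword when e symbols are deliberately corrupted and its channel
# corrupts each remaining symbol independently with probability p.
from math import comb

def eavesdropper_failure_prob(n=255, k=223, e=10, p=0.02):
    t = (n - k) // 2                      # correctable symbol errors
    budget = t - e                        # channel errors still tolerable
    if budget < 0:
        return 1.0                        # intentional errors alone exceed t
    # P[decoding succeeds] = P[channel adds at most `budget` errors
    # among the n - e untouched symbols] (binomial model).
    ok = sum(comb(n - e, i) * p**i * (1 - p)**(n - e - i)
             for i in range(budget + 1))
    return 1.0 - ok

# A receiver with a clean channel (p ~ 0) still decodes since e <= t, while a
# noisier eavesdropper channel pushes the failure probability toward 1.
for p in (0.0, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}: failure probability = {eavesdropper_failure_prob(p=p):.4f}")
```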
Building memory protection mechanisms into embedded hardware is attractive because it has the potential to neutralize a host of software-based attacks with relatively small performance overhead. A hardware monitor, being at the lowest level of the system stack, is more difficult to bypass than a software monitor and hardware-based protections are also potentially more fine-grained than is possible in software: an individual instruction executing on a processor may entail multiple memory accesses, all of which may be tracked in hardware. Finally, hardware-based protection can be performed without the necessity of altering application binaries. This article presents a proof-of-concept codesign of a small embedded processor with a hardware monitor protecting against ROP-style code reuse attacks. While the case study is small, it indicates, we argue, an approach to rapid-prototyping runtime monitors in hardware that is quick, flexible, and extensible as well as being amenable to formal verification.
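The article does not spell out the monitor's exact policy in this abstract, but a common hardware mechanism against ROP-style code reuse is a shadow stack that records return addresses on calls and checks them on returns. The Python sketch below simulates that check in software purely to illustrate the idea; it is not the article's hardware design.

```python
# Sketch: shadow-stack style return-address checking, one common policy a
# hardware monitor can enforce against ROP-style code reuse (illustrative only).
class ShadowStackMonitor:
    def __init__(self):
        self._shadow = []

    def on_call(self, return_address):
        # The monitor pushes the return address onto a protected shadow stack.
        self._shadow.append(return_address)

    def on_return(self, return_address):
        # The monitor compares the program's return target with the shadow copy.
        if not self._shadow or self._shadow.pop() != return_address:
            raise RuntimeError(
                f"ROP-style control-flow violation at {hex(return_address)}")


monitor = ShadowStackMonitor()
monitor.on_call(0x4005D0)          # call site pushes its return address
monitor.on_return(0x4005D0)        # matching return: allowed
monitor.on_call(0x400700)
try:
    monitor.on_return(0x400ABC)    # tampered return address: flagged
except RuntimeError as err:
    print(err)
```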
Every day, DoS/DDoS attacks increase all over the world, and the methods attackers use change continuously. This increase and variety in attacks affects governments, institutions, organizations, and corporations, and every successful attack causes them to lose money and reputation. This paper presents an introduction to a method that can show what the attack is and what it is based on. Because of the change and variety in attack vectors, this is attempted by applying the DBSCAN clustering algorithm to network traffic.
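As a minimal illustration of the approach, the Python sketch below runs scikit-learn's DBSCAN on a few per-source traffic features; the feature choice and parameter values are assumptions for demonstration, not the paper's configuration. Dense clusters can then be inspected as candidate attack traffic, while points labeled -1 are treated as noise.

```python
# Sketch: cluster per-source traffic features with DBSCAN; dense clusters are
# candidate attack patterns, label -1 marks noise. Features/eps are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Placeholder feature rows: [packets/s, mean packet size, distinct dest ports].
traffic = np.array([
    [20,    540,   3],   # ordinary browsing
    [25,    610,   4],
    [9000,   64,   1],   # flood-like bursts (small packets, single port)
    [9500,   60,   1],
    [8800,   66,   1],
    [400,   120, 800],   # scan-like behavior (many ports)
])

X = StandardScaler().fit_transform(traffic)
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X)

for row, label in zip(traffic, labels):
    kind = "noise" if label == -1 else f"cluster {label}"
    print(f"{kind:10s} <- {row.tolist()}")
```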
As cloud services greatly facilitate file sharing online, there has been growing awareness of the security challenges brought by outsourcing data to a third party. Traditionally, the centralized management of the cloud service provider raises security issues because the third party is only semi-trusted by clients, and it also makes convenient online data sharing difficult. In this paper, blockchain technology is utilized to provide decentralized security administration and a more user-friendly service. In addition, Ciphertext-Policy Attribute-Based Encryption is introduced as an effective tool to realize fine-grained access control over the stored files. Meanwhile, the security analysis proves the confidentiality and integrity of the data stored in the cloud server. Finally, we evaluate the computational overhead of our system.
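The fine-grained control comes from the access structure attached to each ciphertext: in CP-ABE, a file is encrypted under a policy over attributes, and only users whose attribute set satisfies that policy can decrypt. The Python sketch below shows only this policy-satisfaction step with an assumed policy format; the actual pairing-based cryptography of CP-ABE is omitted entirely.

```python
# Sketch: the access-structure side of CP-ABE. A ciphertext carries a policy
# over attributes; only users whose attributes satisfy it can decrypt.
# This illustrates policy matching only, not the underlying cryptography.
def satisfies(policy, attributes):
    """policy: nested ('AND'|'OR', [children]) tuples or attribute strings."""
    if isinstance(policy, str):
        return policy in attributes
    op, children = policy
    results = [satisfies(child, attributes) for child in children]
    return all(results) if op == "AND" else any(results)


# Example policy attached to a stored file: (doctor AND cardiology) OR auditor.
policy = ("OR", [("AND", ["doctor", "cardiology"]), "auditor"])

print(satisfies(policy, {"doctor", "cardiology"}))  # True  -> can decrypt
print(satisfies(policy, {"doctor", "radiology"}))   # False -> cannot decrypt
```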
The CPS standard provides a more objective way to evaluate the effect of the control behavior of each control area on the interconnected power grid. The CPS standard is derived from statistical methods emphasizing the long-term control performance of AGC, which benefits the frequency control of the power grid through mutual support between the various grids in the case of an accident. Moreover, the CPS standard reduces the wear on equipment caused by frequent adjustment of the AGC units. The key is to adjust the AGC control strategy to meet the performance requirements of the CPS standard. This paper proposes a dynamic optimal CPS control methodology for interconnected power systems based on model predictive control, which can achieve optimal control under the premise of meeting the CPS standard. The effectiveness of the control strategy is verified by simulation examples.
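For background, the widely used CPS1 index relates each area's ACE to the interconnection frequency deviation; a commonly cited form is CPS1 = (2 - CF1) * 100%, with CF1 = avg[(ACE / (-10B)) * dF] / eps1^2. The Python sketch below evaluates that form on made-up one-minute averages; the values, units, and the exact formula are stated here as assumptions for illustration and are not taken from the paper's MPC controller.

```python
# Sketch: evaluate the CPS1 index on made-up one-minute averages.
# CF1 = avg[(ACE / (-10 * B)) * dF] / eps1^2 ;  CPS1 = (2 - CF1) * 100 %.
# Values and units (B in MW/0.1Hz, dF in Hz) are illustrative assumptions.
def cps1(ace_mw, df_hz, bias_mw_per_0p1hz=-50.0, eps1_hz=0.02):
    terms = [(a / (-10.0 * bias_mw_per_0p1hz)) * f for a, f in zip(ace_mw, df_hz)]
    cf1 = (sum(terms) / len(terms)) / eps1_hz**2
    return (2.0 - cf1) * 100.0

# One-minute averages of ACE (MW) and frequency deviation (Hz).
ace = [12.0, -8.0, 5.0, -15.0, 3.0, 7.0]
df = [0.01, -0.012, 0.004, -0.02, 0.002, 0.006]

score = cps1(ace, df)
print(f"CPS1 = {score:.1f}%  ({'compliant' if score >= 100.0 else 'non-compliant'})")
```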
Supervisory Control and Data Acquisition (SCADA) systems play a critical role in the operation of large-scale distributed industrial systems. There are many vulnerabilities in SCADA systems, and inadvertent events or malicious attacks from outside as well as inside could lead to catastrophic consequences. Network-based intrusion detection is a preferred approach to provide security analysis for SCADA systems due to its less intrusive nature. Data in SCADA network traffic can generally be divided into transport, operation, and content levels. Most existing solutions only focus on monitoring and event detection at one or two of these levels, which is not enough to detect and reason about attacks at all three levels. In this paper, we develop a novel edge-based multi-level anomaly detection framework for SCADA networks named EDMAND. EDMAND monitors all three levels of network traffic data and applies appropriate anomaly detection methods based on the distinct characteristics of the data. Alerts are generated, aggregated, and prioritized before being sent back to control centers. A prototype of the framework is built to evaluate its detection ability and time overhead.
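A rough Python sketch of the alert-handling side follows: alerts from the transport, operation, and content levels are aggregated by (level, source) and prioritized by severity and recurrence before being reported. The data model and the scoring are assumptions for illustration, not EDMAND's actual design.

```python
# Sketch: aggregate and prioritize anomaly alerts coming from the three
# SCADA data levels (transport / operation / content). Illustrative only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    level: str       # "transport", "operation", or "content"
    source: str      # e.g. device or flow identifier
    severity: float  # 0..1 anomaly score from the level-specific detector

def aggregate(alerts):
    """Group alerts per (level, source), keeping count and max severity."""
    groups = defaultdict(lambda: {"count": 0, "max_severity": 0.0})
    for a in alerts:
        g = groups[(a.level, a.source)]
        g["count"] += 1
        g["max_severity"] = max(g["max_severity"], a.severity)
    return groups

def prioritize(groups):
    """Rank aggregated alerts: severity first, then how often they recur."""
    return sorted(groups.items(),
                  key=lambda kv: (kv[1]["max_severity"], kv[1]["count"]),
                  reverse=True)

alerts = [
    Alert("transport", "plc-01", 0.4),
    Alert("operation", "plc-01", 0.9),   # unusual command sequence
    Alert("content",   "rtu-07", 0.7),   # measurement far outside its range
    Alert("transport", "plc-01", 0.5),
]
for (level, source), info in prioritize(aggregate(alerts)):
    print(f"{info['max_severity']:.1f}  x{info['count']}  {level:9s} {source}")
```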
Parfait [1] is a static analysis tool originally developed to find implementation defects in C/C++ systems code. Parfait focuses on providing both high precision (a low false positive rate) and the ability to scale to systems with millions of lines of code (typically requiring 10 minutes of analysis time per million lines). Parfait has since been extended to detect security vulnerabilities in applications code, supporting the Java EE and PL/SQL server stack. In this abstract we describe some of the challenges we encountered in this process, including some of the differences seen between the applications code being analysed, the solutions that enable us to analyse a variety of applications, and a summary of the challenges that remain.
Monitoring for security and well-being in highly populated areas is a critical issue for city administrators, policy makers, and urban planners. As an essential part of many dynamic and critical data-driven tasks, situational awareness (SAW) provides decision-makers with deeper insight into the meaning of urban surveillance data. Thus, surveillance measures are increasingly needed. However, traditional surveillance platforms are not scalable when more cameras are added to the network. In this work, smart surveillance is proposed as an edge service. To accomplish the object detection, identification, and tracking tasks at the edge-fog layers, two novel lightweight algorithms are proposed for detection and tracking, respectively. A prototype has been built to validate the feasibility of the idea, and the test results are very encouraging.
Mobile two-factor authentication (2FA) has become commonplace along with the popularity of mobile devices. Current mobile 2FA solutions all require some form of user effort, which may seriously affect the experience of mobile users, especially senior citizens or those with disabilities such as visually impaired users. In this paper, we propose Proximity-Proof, a secure and usable mobile 2FA system that does not involve user interactions. Proximity-Proof automatically transmits a user's 2FA response via inaudible OFDM-modulated acoustic signals to the login browser. We propose a novel technique to extract individual speaker and microphone fingerprints of a mobile device to defend against the powerful man-in-the-middle (MiM) attack. In addition, Proximity-Proof explores two-way acoustic ranging to thwart the co-located attack. To the best of our knowledge, Proximity-Proof is the first mobile 2FA scheme resilient to the MiM and co-located attacks. Our empirical analysis shows that Proximity-Proof is at least as secure as existing mobile 2FA solutions while being highly usable. We also prototype Proximity-Proof and confirm its high security, usability, and efficiency through comprehensive user experiments.
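To illustrate the two-way acoustic ranging idea, the Python sketch below estimates the device-to-terminal distance from a round-trip time measurement by subtracting the responder's known turnaround delay. The numbers are made up and this is only the textbook ranging formula, not Proximity-Proof's actual protocol.

```python
# Sketch: textbook two-way acoustic ranging. A prover emits a chirp, the
# responder replies after a fixed turnaround delay, and the distance follows
# from the remaining round-trip time. Values are illustrative assumptions.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def ranging_distance(t_round_s, t_turnaround_s):
    return SPEED_OF_SOUND * (t_round_s - t_turnaround_s) / 2.0

t_round = 0.1523        # measured round-trip time in seconds
t_turnaround = 0.1500   # responder's fixed processing delay
d = ranging_distance(t_round, t_turnaround)
print(f"estimated distance: {d:.2f} m")  # ~0.39 m, i.e. near the login terminal
```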
While existing research has explored the trade-off between security and performance, these efforts primarily focus on software consumers and often overlook the effectiveness and productivity of software producers. In this paper, we highlight an established security practice, air-gap isolation, and some challenges it uniquely instigates. To better understand and start quantifying the impacts of air-gap isolation on software development productivity, we conducted a survey at a commercial software company: Analytical Graphics, Inc. Based on our insights of dealing with air-gap isolation daily, we suggest some possible directions for future research. Our goal is to bring attention to this neglected area of research and to start a discussion in the SE community about the struggles faced by many commercial and governmental organizations.
This paper aims to explain static analysis techniques in detail and to highlight the weaknesses and challenges they face. To this end, more than 80 static analysis-based frameworks have been studied, and in their light, the process of detecting malicious applications has been divided into four phases that are explained schematically. The features used in static analysis are also discussed in detail, divided into four categories: manifest-based features, code-based features, semantic features, and app metadata-based features. The challenges facing methods based on static analysis are likewise discussed in detail. Finally, a case study was conducted to test the strength of some well-known commercial antivirus products and one state-of-the-art academic static analysis framework against the obfuscation techniques used by developers of malicious applications. The results showed a significant impact on the performance of most of the tested antivirus products and frameworks, reflecting the urgent need for more accurate tools.
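As an example of the manifest-based feature category, the Python sketch below pulls requested permissions and declared components out of a decoded AndroidManifest.xml (for instance, one produced by apktool) and turns them into a simple feature vector. The permission vocabulary and the file path are assumptions for illustration, not the paper's feature set.

```python
# Sketch: extract manifest-based features (permissions, component counts) from
# a decoded AndroidManifest.xml and build a small feature vector. Illustrative.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
WATCHED_PERMISSIONS = [            # assumed feature vocabulary
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.INTERNET",
    "android.permission.RECEIVE_BOOT_COMPLETED",
]

def manifest_features(manifest_path):
    root = ET.parse(manifest_path).getroot()
    requested = {p.get(ANDROID_NS + "name")
                 for p in root.iter("uses-permission")}
    # Binary indicators for watched permissions.
    features = {perm: int(perm in requested) for perm in WATCHED_PERMISSIONS}
    # Counts of declared components (activities, services, receivers).
    app = root.find("application")
    for comp in ("activity", "service", "receiver"):
        features[f"n_{comp}"] = len(app.findall(comp)) if app is not None else 0
    return features

# Example (assumed path of a manifest decoded with apktool):
# print(manifest_features("decoded_apk/AndroidManifest.xml"))
```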
Security has always been a major issue in the cloud. Data sources are the most valuable and vulnerable information and are the primary target attackers aim to steal. If data is lost, the privacy and security of every cloud user are compromised. Even though a cloud network is secured externally, the threat of an internal attacker remains. Internal attackers compromise a vulnerable user node to gain access to the system; they are connected to the cloud network internally and launch attacks while pretending to be trusted users. Machine learning approaches are widely used for cloud security issues. Existing machine learning based security approaches classify a node as misbehaving based on short-term behavioral data and do not differentiate whether a misbehaving node is a malicious node or a broken node. To address this problem, this paper proposes an Improvised Long Short-Term Memory (ILSTM) model which learns the behavior of a user, automatically trains itself, and stores the behavioral data. The model can easily classify user behavior as normal or abnormal. The proposed ILSTM not only identifies an anomalous node but also determines whether a misbehaving node is a broken node, a new user node, or a compromised node using a calculated trust factor. The proposed model not only detects attacks accurately but also reduces false alarms in the cloud network.
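The abstract does not give the ILSTM architecture, so the sketch below only shows the general shape of such a model: a Keras LSTM over fixed-length sequences of per-timestep behavioral features, trained to label a sequence as normal or abnormal. Layer sizes, feature counts, and the data are placeholder assumptions.

```python
# Sketch: a generic LSTM sequence classifier for user-behavior data
# (normal vs. abnormal). Architecture and data are placeholder assumptions.
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES = 50, 8        # e.g. 50 observations of 8 behavioral metrics

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(32),                       # summarize the behavior sequence
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"), # P(abnormal)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training data standing in for logged user-behavior sequences.
X = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print("P(abnormal) for one new sequence:",
      float(model.predict(X[:1], verbose=0)[0, 0]))
```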