Biblio
Real-time situational awareness (SA) plays an essential role in accurate and timely incident response. Maintaining SA is, however, extremely costly due to the excessive false alerts generated by intrusion detection systems, which require prioritization and manual investigation by security analysts. In this paper, we propose a novel approach to prioritizing alerts so as to maximize SA, by formulating the problem as one of active learning in a hidden Markov model (HMM). We propose to use the entropy of the belief of the security state as a proxy for the mean squared error (MSE) of the belief, and we develop two computationally tractable policies for choosing alerts to investigate that minimize the entropy, taking into account the potential uncertainty of the investigations' results. We use simulations to compare our policies to a variety of baseline policies. We find that our policies reduce the MSE of the belief of the security state by up to 50% compared to static baseline policies, and that they are robust to high false alert rates and to investigation errors.
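A minimal sketch (not the authors' implementation) of the quantity this abstract optimizes: the Shannon entropy of the belief over hidden security states in an HMM. The state names, transition/observation matrices, and observation sequence below are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.95, 0.05],   # hypothetical transition matrix:
              [0.10, 0.90]])  # states 0 = benign, 1 = compromised
B = np.array([[0.8, 0.2],     # hypothetical observation matrix:
              [0.3, 0.7]])    # columns = alert outcome (0 = none, 1 = alert)

def belief_update(belief, obs):
    """One HMM filtering step: predict with A, correct with a column of B."""
    predicted = belief @ A
    posterior = predicted * B[:, obs]
    return posterior / posterior.sum()

def belief_entropy(belief):
    """Shannon entropy of the belief, the paper's proxy for belief MSE."""
    p = belief[belief > 0]
    return -(p * np.log2(p)).sum()

belief = np.array([0.9, 0.1])
for obs in [1, 1, 0]:          # a toy sequence of alert observations
    belief = belief_update(belief, obs)
    print(belief, belief_entropy(belief))
```

An investigation policy of the kind described would pick the alert whose (possibly noisy) investigation result most reduces this entropy in expectation.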
Accurate and synchronized timing information is required by power system operators for controlling grid infrastructure (relays, Phasor Measurement Units (PMUs), etc.) and determining asset positions. The satellite-based Global Positioning System (GPS) is the primary source of timing information. However, GPS disruptions (both intentional and unintentional) can significantly compromise the reliability and security of our electric grids. A robust alternate source of accurate timing is critical, both as a deterrent against malicious attacks and as a redundant system enhancing resilience against extreme events that could disrupt the GPS network. To achieve this, we rely on a highly accurate, terrestrial atomic clock-based network for alternative timing and synchronization. In this paper, we discuss an experimental setup for an alternative timing approach. The data obtained from this setup is continuously monitored and analyzed using various time deviation metrics. We also use these metrics to compute deviations of our clock with respect to the National Institute of Standards and Technology's (NIST) GPS data. The results obtained from these metric computations are discussed in detail. Finally, we discuss the integration of the procedures involved, such as real-time data ingestion, metric computation, and result visualization, into a novel microservices-based architecture for situational awareness.
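As an illustration of the kind of stability metric such monitoring relies on (our sketch, not the authors' pipeline), the overlapping Allan deviation can be computed from phase (time-error) samples; the synthetic data and sampling interval below are assumptions.

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0, from phase data x."""
    x = np.asarray(x, dtype=float)
    d2 = x[2*m:] - 2*x[m:-m] + x[:-2*m]        # second differences at lag m
    avar = (d2**2).mean() / (2.0 * (m * tau0)**2)
    return np.sqrt(avar)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 1e-9, 10_000))     # synthetic ns-level phase walk, 1 s spacing
for m in (1, 10, 100):
    print(m, overlapping_adev(x, tau0=1.0, m=m))
```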
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of the alarms generated on a version are repeated, i.e., they were also generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after this suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using these corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes with different priorities assigned to them. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms obtained by prioritizing the classes can help suppress lower-ranked alarms when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
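A minimal sketch of the class-based ranking idea the abstract describes. The class names and priority values are placeholders, not the paper's actual six-class taxonomy, which is derived from the types of code changes.

```python
from dataclasses import dataclass

PRIORITY = {"class1": 1, "class2": 2, "class3": 3,   # 1 = inspect first
            "class4": 4, "class5": 5, "class6": 6}

@dataclass
class DeltaAlarm:
    alarm_id: int
    change_class: str   # class assigned from the alarm's corresponding code change

def rank(alarms):
    """Order delta alarms so higher-priority classes are inspected first."""
    return sorted(alarms, key=lambda a: PRIORITY[a.change_class])

alarms = [DeltaAlarm(1, "class5"), DeltaAlarm(2, "class1"), DeltaAlarm(3, "class3")]
for a in rank(alarms):
    print(a.alarm_id, a.change_class)
```

Under a limited inspection budget, alarms at the tail of this ordering are the ones suppressed.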
Concurrency vulnerabilities caused by synchronization problems occur during the execution of multi-threaded programs, and their emergence often poses a great threat to the system. Once concurrency vulnerabilities are exploited, the system can suffer various attacks that seriously affect its availability, confidentiality, and security. In this paper, we extract 839 concurrency vulnerabilities from Common Vulnerabilities and Exposures (CVE) and conduct a comprehensive analysis of their trends, classifications, causes, severity, and impact. We obtain the following findings: 1) From 1999 to 2021, the number of disclosed concurrency vulnerabilities shows an overall upward trend. 2) Among concurrency vulnerability types, race conditions account for the largest proportion. 3) The overall severity of concurrency vulnerabilities is medium risk. 4) The numbers of concurrency vulnerabilities exploitable via local access and via network access are almost equal, and nearly half of the concurrency vulnerabilities (377/839) can be accessed remotely. 5) The access complexity of 571 concurrency vulnerabilities is medium, and the numbers of concurrency vulnerabilities with high and low access complexity are almost equal. The results of this empirical study can provide support and guidance for research in the field of concurrency vulnerabilities.
The rapid growth of the Internet and digital communications requires that transmissions be protected from illegitimate users. It is important to secure the information transmitted between sender and receiver over communication channels such as the Internet, since it is a public environment. Cryptography and steganography are the most popular techniques used for sending data in a secret way. In this paper, we propose a new algorithm that combines both cryptography and steganography in order to increase the level of data security against attackers. For cryptography, we use the affine Hill cipher; for steganography, we use hybrid edge detection with LSB embedding to hide the message. Our paper shows how image edges can be used to hide a text message. Grayscale images are used for our experiments, and a comparison is developed based on different edge detection operators, namely (Canny-LoG) and (Canny-Sobel). Their performance is measured using PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Squared Error), and EC (Embedding Capacity). The results indicate that using hybrid edge detection (Canny-LoG) with LSB for hiding data provides higher embedding capacity than using hybrid edge detection (Canny-Sobel) with LSB. We show that hiding in the image edge areas preserves the imperceptibility of the stego-image, and that the secret message can be extracted without any distortion.
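A minimal sketch of the image-quality metrics the abstract reports (MSE and PSNR) for comparing a cover image with its stego counterpart. The random 8-bit grayscale images and LSB-flip rate below are placeholders for real cover/stego pairs.

```python
import numpy as np

def mse(cover, stego):
    """Mean squared error between two same-sized grayscale images."""
    return np.mean((cover.astype(float) - stego.astype(float)) ** 2)

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less visible change."""
    e = mse(cover, stego)
    return float("inf") if e == 0 else 10.0 * np.log10(max_val**2 / e)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
stego = cover.copy()
stego ^= (rng.random(cover.shape) < 0.1).astype(np.uint8)  # flip ~10% of LSBs
print(mse(cover, stego), psnr(cover, stego))
```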
Deadlock is one of the critical problems in the Message Passing Interface (MPI). At present, most techniques for detecting MPI deadlocks rely on exhausting all execution paths of a program, which is extremely inefficient. In addition, as the number of wildcard receive events and processes increases, the number of execution paths grows exponentially, further worsening the situation. To alleviate this problem, we propose a deadlock detection approach called SAMPI, based on match-sets, that avoids exploring execution paths. In this approach, a match detection rule is employed to form rough match-sets based on the Lazy Lamport Clocks Protocol. We then design three refining algorithms based on the non-overtaking rule and the MPI communication mechanism to refine the match-sets. Finally, deadlocks are detected by analyzing the refined match-sets. We performed an experimental evaluation on 15 programs, and the results show that SAMPI is efficient in detecting deadlocks in MPI programs, especially when handling programs with many interleavings.
Server-side web applications are vulnerable to request races. While some previous studies of real-world request races exist, they primarily focus on the root causes of these bugs. To better combat request races in server-side web applications, we need a deep understanding of their characteristics. In this paper, we provide a complementary focus on race effects and fixes with an enlarged set of request races from web applications developed with Object-Relational Mapping (ORM) frameworks. We revisit characterization questions used in previous studies on newly included request races, distinguish the external and internal effects of request races, and relate request-race fixes to the concurrency control mechanisms in languages and frameworks for developing server-side web applications. Our study reveals that: (1) request races from ORM-based web applications share the same characteristics as those from raw-SQL web applications; (2) request races violating application semantics without explicit crashes and error messages externally are common, and latent request races, which only corrupt some shared resource internally but require extra requests to expose the misbehavior, are also common; and (3) various fix strategies other than using synchronization mechanisms are used to fix request races. We expect that our results can help developers better understand request races and guide the design and development of tools for combating request races.
Distributed Denial of Service (DDoS) attacks aim to make a server unresponsive by flooding the target server with a large volume of packets (volume-based DDoS attacks), by keeping connections open for a long time and exhausting resources (low-and-slow DDoS attacks), or by targeting protocols (protocol-based attacks). Volume-based DDoS attacks that flood the target server with a large number of packets are easier to detect because of the abnormality in packet flow. Low-and-slow DDoS attacks, however, make the server unavailable by keeping connections open for a long time while sending traffic similar to genuine traffic, making such attacks difficult to detect. This paper proposes a solution to detect and mitigate one such low-and-slow DDoS attack, Slowloris, in an SDN (Software Defined Networking) environment. The proposed solution involves communication between the detection and mitigation module and the SDN controller to obtain the data needed to detect and mitigate the attack.
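An illustrative heuristic, not the paper's detection module: Slowloris keeps many connections open while sending very little data, so one simple flow-level check is to flag clients whose connections are old but have transferred few bytes. The thresholds below are arbitrary assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class Flow:
    client_ip: str
    opened_at: float     # epoch seconds when the connection was opened
    bytes_sent: int      # bytes received from the client so far

def suspect_clients(flows, min_age=60.0, max_bytes=512, min_conns=20):
    """Return client IPs holding many old, nearly idle connections."""
    now = time.time()
    counts = {}
    for f in flows:
        if now - f.opened_at > min_age and f.bytes_sent < max_bytes:
            counts[f.client_ip] = counts.get(f.client_ip, 0) + 1
    return [ip for ip, n in counts.items() if n >= min_conns]

flows = [Flow("10.0.0.5", time.time() - 300, 100) for _ in range(25)]
print(suspect_clients(flows))   # -> ['10.0.0.5']
```

In an SDN setting, the flow statistics would come from the controller, which could then install rules blocking the flagged clients.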
This paper addresses the issues in managing group keys among clusters in Mobile Ad hoc Networks (MANETs). With the dynamic movement of nodes, providing secure communication and managing secret keys in a MANET is difficult to achieve. In this paper, we propose a distributed secure broadcast stateless group key management framework (DSBS-GKM) for efficient group key management. This scheme combines the benefits of hash functions and Lagrange interpolation polynomials in managing MANET nodes. To provide a strong security mechanism, a revocation system that detects and revokes misbehaving nodes is presented. The simulation results show that the proposed DSBS-GKM scheme attains better rekeying and revocation performance compared with other existing key management schemes.
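A minimal sketch of Lagrange interpolation over a prime field, the mathematical building block the abstract cites, shown here as a Shamir-style secret reconstruction. The modulus, polynomial, and share points are toy values, not the paper's protocol.

```python
P = 2**61 - 1   # a Mersenne prime, used as the field modulus

def lagrange_at_zero(shares, p=P):
    """Reconstruct f(0) from points (x_i, y_i) of a polynomial over GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# f(x) = 1234 + 7x + 11x^2 (degree 2, so any 3 shares reconstruct the key)
f = lambda x: (1234 + 7*x + 11*x*x) % P
shares = [(x, f(x)) for x in (2, 5, 9)]
print(lagrange_at_zero(shares))  # -> 1234
```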
With the advent of massive machine-type communications, security protection becomes more important than ever. Efforts have been made to build security protection into physical-layer signal design, so-called physical-layer security (PLS). The purpose of this paper is to evaluate the performance of PLS schemes for multiple-input multiple-output (MIMO) systems with space-time block coding (STBC) under imperfect channel estimation. Three PLS schemes for STBC are modeled, their bit error rate (BER) performance is evaluated under various channel estimation error environments, and their performance characteristics are analyzed.
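A minimal Monte Carlo sketch (not the paper's STBC schemes) of how channel-estimation error degrades BER: BPSK over a flat Rayleigh channel where the receiver equalizes with a noisy estimate h_hat = h + e. The SNR and error variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                                   # BPSK symbols
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

def ber(snr_db, est_err_var):
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)      # per-component noise std
    y = h * s + noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))
    e = np.sqrt(est_err_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    h_hat = h + e                                      # imperfect CSI at receiver
    s_hat = np.real(np.conj(h_hat) * y) > 0            # coherent BPSK detection
    return np.mean(s_hat != (bits == 1))

for var in (0.0, 0.05, 0.2):                           # estimation-error variance
    print(var, ber(snr_db=10, est_err_var=var))
```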
Many organizations process and store classified data within their computer networks. Owing to the value of the data they hold, such organizations are attractive targets for adversaries. Accordingly, sensitive organizations resort to an 'air-gap' approach on their networks to ensure better protection. However, despite this physical and logical isolation, attackers have successfully compromised such networks, as the examples of Stuxnet and Agent.btz show. Such attacks were possible due to the successful manipulation of human beings, and building them up often begins with persistent reconnaissance of employees and collection of their data. With the rapid integration of social media into our daily lives, the prospects for data-seekers through that platform are higher. The inherent risks and vulnerabilities of social networking sites and apps have cultivated a rich environment for foreign adversaries to cherry-pick personal information and carry out successful profiling of employees assigned to sensitive appointments. With further targeted social engineering techniques against the identified employees and their families, attackers extract more and more relevant data to assemble an intelligence picture. Finally, all the information is fused to design sophisticated attacks against the air-gapped facility for data pilferage. The success of adversaries in harvesting the personal information of victims largely depends on common errors committed by legitimate users while on duty, in transit, and after their retreat. Such errors will keep repeating unless they are traced to the underlying human behaviors and weaknesses and the requisite mitigation framework is worked out.
Cloud security has become a serious challenge due to the increasing number of attacks occurring day by day. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game theory based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for security in the cloud. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. The performance of the proposed model is evaluated on the CICIDS-2018 dataset. The dataset is normalized, and optimal points for normal and attack data are computed using the Improved Whale Algorithm (IWA). The simulation results show that the proposed model exhibits improved performance compared with existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
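A minimal sketch, not the GT-CSDNN model itself: normalizing flow features and training a small neural network to separate attack from normal traffic, the classification step the abstract describes. The data here is synthetic rather than CICIDS-2018.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (500, 10)),     # synthetic "normal" flows
               rng.normal(1.5, 1, (500, 10))])    # synthetic "attack" flows
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)               # the normalization step
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("accuracy:", clf.score(scaler.transform(X_te), y_te))
```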
Given the COVID-19 pandemic, this paper aims at providing a full-process information system to support the detection of pathogens for a large population, satisfying the requirements of light weight, low cost, high concurrency, high reliability, quick response, and high security. The project includes functional modules such as sample collection, sample transfer, sample reception, laboratory testing, test result inquiry, pandemic analysis, and monitoring. The progress and efficiency of each collection point, as well as the status of sample transfer, reception, and laboratory testing, are all monitored in real time in order to support comprehensive surveillance of the pandemic situation and the timely, effective, and dynamic deployment of pandemic prevention resources. Deployed on a cloud platform, the system can satisfy ultra-high concurrent data collection requirements of 20 million collections per day, with a maximum of 5 million collections per hour, due to its advantages of high concurrency, elasticity, security, and manageability. The system has been widely used in Jiangsu and Shaanxi provinces for the prevention and control of the COVID-19 pandemic. Over 100 million nucleic acid test (NAT) records have been collected nationwide, providing strong informational support for the scientific and reasonable formulation and execution of COVID-19 prevention plans.
In crowdsourced testing services, the intellectual property of crowdsourced testing faces problems such as code plagiarism, difficulty in confirming ownership, and unreliable data. Blockchain is a decentralized, tamper-proof distributed ledger that can help solve these problems. This paper proposes an intellectual property rights confirmation system for crowdsourced testing services that combines blockchain, IPFS (InterPlanetary File System), digital signatures, and code similarity detection to realize the confirmation of crowdsourced testing intellectual property. Performance testing shows that the system can meet the requirements of normal crowdsourcing business as well as high-concurrency situations.
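A minimal sketch of the digital-signature step such a rights-confirmation record could use (our illustration; the paper does not specify the signature scheme, and Ed25519 is an assumption): hash the submitted test artifact, sign the digest, and verify it later against the public key.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

artifact = b"crowdsourced test case source code ..."   # placeholder content
digest = hashlib.sha256(artifact).digest()             # content fingerprint

key = Ed25519PrivateKey.generate()
signature = key.sign(digest)                           # tester signs the digest

# Anyone holding the public key (e.g. read from the blockchain record)
# can verify that the artifact matches the claimed authorship.
try:
    key.public_key().verify(signature, digest)
    print("signature valid; digest:", digest.hex()[:16], "...")
except InvalidSignature:
    print("signature invalid")
```

In the architecture the abstract sketches, the artifact itself would live in IPFS while the digest and signature are anchored on-chain.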
With a record 400Gbps 100-piece-FPGA implementation, we investigate the performance of potential FEC schemes for OIF-800GZR. By comparing power dissipation and the correction threshold at a BER of 10^-15, we propose the simplified OFEC for the 800G-ZR FEC.
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g. addition, copy and refresh gadgets) as well as quadratic gadgets (e.g. multiplication gadgets) that might include non-linear randomness (e.g. by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares which are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS, the only previous tool verifying random probing composability and expandability, as well as SILVER, the only previous tool providing complete verification for quadratic gadgets with non-linear randomness. Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
Currently, research on 5G communication is focusing increasingly on communication techniques, and previous studies have primarily focused on preventing communication disruption; to date, there has not been sufficient research on network anomaly detection as a security countermeasure. Since 5G network data will be more complex and dynamic, intelligent network anomaly detection is a necessary solution for protecting the network infrastructure. However, because AI-based network anomaly detection is data-dependent, it is difficult to collect labeled data in the industrial field, and performance degradation may occur when models are applied in real deployments because of domain shift. Therefore, in this paper, we study an intelligent network anomaly detection technique based on domain adaptation (DA) in the 5G edge network to address these problems of data-driven AI. It allows us to train models in data-rich domains and apply detection in domains with insufficient data. Our method will contribute to AI-based network anomaly detection and improve the security of the 5G edge network.
We present an online framework for learning and updating security policies in dynamic IT environments. It includes three components: a digital twin of the target system, which continuously collects data and evaluates learned policies; a system identification process, which periodically estimates system models based on the collected data; and a policy learning process that is based on reinforcement learning. To evaluate our framework, we apply it to an intrusion prevention use case that involves a dynamic IT infrastructure. Our results demonstrate that the framework automatically adapts security policies to changes in the IT infrastructure and that it outperforms a state-of-the-art method.
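A schematic sketch of the three-component loop this abstract describes (digital twin, system identification, policy learning). All class and function names are our placeholders, not the authors' implementation; the RL step in particular is stubbed out.

```python
class DigitalTwin:
    """Stand-in for an emulated replica of the target IT infrastructure."""

    def collect_traces(self):
        """Run the emulation and return observed (state, action, reward) data."""
        return [{"state": 0, "action": 0, "next_state": 1, "reward": -1.0}]

    def evaluate(self, policy):
        """Score a learned policy against the twin before deployment."""
        return sum(policy(t["state"]) == 0 for t in self.collect_traces())

def identify_model(traces):
    """Periodic system identification: estimate a model from collected data."""
    return {"transitions": traces}

def learn_policy(model):
    """Placeholder for the reinforcement learning step on the estimated model."""
    return lambda state: 0   # trivial policy: always take the first action

twin = DigitalTwin()
for episode in range(3):                 # the continuous online loop
    traces = twin.collect_traces()       # 1) data collection in the twin
    model = identify_model(traces)       # 2) system identification
    policy = learn_policy(model)         # 3) policy learning via RL
    print("eval score:", twin.evaluate(policy))
```

The point of the loop structure is that re-running identification and learning as traces accumulate is what lets the policy track changes in the infrastructure.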
Control room video surveillance is an important source of information for ensuring public safety. To facilitate the process, a Decision-Support System (DSS) designed for the security task force is vital and necessary for making rapid decisions from a sea of information. In mission-critical operations, Situational Awareness (SA), which consists of knowing what is going on around you at any given time, plays a crucial role across a variety of industries and should be placed at the center of our DSS. In our approach, the SA system takes advantage of the human factor through a reinforcement signal, whereas previous work in this field focuses on improving the knowledge level of the DSS first and then uses the human factor only for decision-making. In this paper, we propose a situational awareness-centric decision-support system framework for mission-critical operations driven by Quality of Experience (QoE). Our idea is inspired by the reinforcement learning feedback process, which updates the environment understanding of our DSS. The feedback is injected by a QoE measure built on user perception. This approach allows our DSS to evolve according to the context with an up-to-date SA.
Cyberspace is the fifth domain of activity after land, sea, air, and space. Safeguarding cyberspace security is a major issue related to national security, national sovereignty, and the legitimate rights and interests of the people. With the rapid development of artificial intelligence technology and its application in various fields, cyberspace security is facing new challenges: how to help network security personnel grasp security trends at any time, help monitoring personnel respond quickly to alarm information, and facilitate tracking and handling by monitoring personnel. This paper introduces a method that uses a situational awareness micro-application, a practical attack-and-defense robot, to quickly feed network attack information back to monitoring personnel, report the attack information to the information reporting platform in a timely manner, and automatically block malicious IPs.
The power industrial control system is an important part of the national critical information infrastructure. Its security is related to national strategic security, and it has become an important target of cyber attacks. To address the problem that existing vulnerability detection technology for power industrial control systems cannot meet the requirement of being non-destructive, this paper proposes an industrial control vulnerability analysis technique that combines dynamic and static analysis. On this basis, a non-destructive industrial control vulnerability detection system is designed, and a simulation verification platform is built to verify its effectiveness. Together, these provide technical support for research on the safety protection of power industrial control systems.
The main intention of edge computing is to improve network performance by storing and computing data at the edge of the network, near the end user. However, its rapid development has largely ignored security threats in large-scale computing platforms and their applications. Security and privacy are therefore crucial needs for edge computing and edge-computing-based environments. Security vulnerabilities in edge computing systems lead to threats affecting edge computing networks, so there is a basic need for an intrusion detection system (IDS) designed for edge computing to mitigate security attacks. Given recent attacks, traditional algorithms may not be feasible for edge computing. This article outlines the latest IDSs designed for edge computing and focuses on the corresponding methods, functions, and mechanisms. The review also provides a deep understanding of emerging security attacks in edge computing. This article shows that although the design and implementation of edge computing IDSs have been studied previously, the development of efficient, reliable, and robust IDSs for edge computing systems is still a crucial task. The review closes with future prospects for IDS development.