Bibliography
Accurate and synchronized timing information is required by power system operators for controlling grid infrastructure (relays, Phasor Measurement Units (PMUs), etc.) and determining asset positions. The satellite-based Global Positioning System (GPS) is the primary source of timing information. However, GPS disruptions today (both intentional and unintentional) can significantly compromise the reliability and security of our electric grids. A robust alternate source of accurate timing is critical, both as a deterrent against malicious attacks and as a redundant system that enhances resilience against extreme events that could disrupt the GPS network. To achieve this, we rely on a highly accurate, terrestrial atomic clock-based network for alternative timing and synchronization. In this paper, we discuss an experimental setup for such an alternative timing approach. The data obtained from this experimental setup is continuously monitored and analyzed using various time deviation metrics. We also use these metrics to compute deviations of our clock with respect to the National Institute of Standards and Technology's (NIST) GPS data. The results obtained from these metric computations are discussed in detail. Finally, we discuss the integration of the procedures involved, such as real-time data ingestion, metric computation, and result visualization, into a novel microservices-based architecture for situational awareness.
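The abstract does not name the specific time deviation metrics it computes; as one plausible instance, the sketch below evaluates the standard time deviation (TDEV) from phase-offset samples, a metric commonly used for clock comparison. The sampling interval and the simulated random-walk phase data are assumptions for illustration.

```python
# Hedged sketch: computes the standard time deviation (TDEV) from
# phase-offset samples x (in seconds) taken at a fixed interval tau0.
# Illustrative only; the paper's exact metrics are not specified.
import numpy as np

def tdev(x, tau0, n):
    """TDEV at averaging time tau = n * tau0 from phase data x."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N < 3 * n + 1:
        raise ValueError("need at least 3n+1 phase samples")
    # Second differences x[i+2n] - 2*x[i+n] + x[i] for all valid i.
    d = x[2 * n:] - 2 * x[n:-n] + x[:-2 * n]
    # Inner moving sum of length n over the second differences.
    s = np.convolve(d, np.ones(n), mode="valid")
    return np.sqrt(np.mean(s ** 2) / (6 * n ** 2))

# Example: 1 Hz phase-offset measurements against a GPS-derived reference.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 1e-9, 86400))  # simulated random-walk phase
print(tdev(x, tau0=1.0, n=16))             # TDEV at tau = 16 s
```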
Bus factor is a metric that identifies how resilient a project is to sudden engineer turnover. It is the minimum number of engineers that have to be hit by a bus for the project to stall. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with the VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results to those of the state-of-the-art tool by Avelino et al. against the ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better at predicting both the bus factor and the key developers. Finally, we use the interviews and the surveys to derive a set of best practices to address the bus factor issue and proposals for a possible bus factor assessment tool.
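As an illustration of the VCS-only baseline the paper compares against, the following is a minimal greedy sketch in the spirit of Avelino et al.: a file counts as abandoned once all of its authors have left, and the bus factor is the number of top contributors whose removal abandons more than half of the files. The authorship rule here is a simplification (the real tool uses a weighted degree-of-authorship measure), and the multimodal algorithm would additionally feed code review and meeting data into the per-file author sets.

```python
# Hedged sketch of a greedy VCS-based bus factor estimate. A file is
# "abandoned" once all of its authors are removed; the bus factor is the
# number of top developers whose removal abandons more than half the files.
from collections import Counter

def bus_factor(file_authors):
    """file_authors: dict mapping file -> set of developers who author it."""
    removed, n = set(), 0
    files = list(file_authors)
    while True:
        abandoned = sum(1 for f in files if file_authors[f] <= removed)
        if abandoned > len(files) / 2:
            return n
        # Greedily remove the developer who authors the most surviving files.
        load = Counter(d for f in files if not file_authors[f] <= removed
                       for d in file_authors[f] - removed)
        if not load:
            return n
        removed.add(load.most_common(1)[0][0])
        n += 1

# Toy example: alice dominates, so losing her alone stalls the project.
print(bus_factor({
    "core.py": {"alice"},
    "api.py": {"alice", "bob"},
    "ui.py": {"alice"},
    "db.py": {"alice"},
    "docs.md": {"carol"},
}))  # -> 1
```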
Distributed Denial of Service (DDoS) attacks aim to make a server unresponsive by flooding it with a large volume of packets (volume-based DDoS attacks), by keeping connections open for a long time and exhausting its resources (low-and-slow DDoS attacks), or by targeting protocols (protocol-based attacks). Volume-based DDoS attacks are easier to detect because of the abnormality in packet flow. Low-and-slow DDoS attacks, however, make the server unavailable by keeping connections open for a long time while sending traffic similar to genuine traffic, which makes such attacks difficult to detect. This paper proposes a solution to detect and mitigate one such low-and-slow DDoS attack, Slowloris, in a Software Defined Networking (SDN) environment. The proposed solution involves communication between the detection and mitigation module and the SDN controller to obtain the data needed to detect and mitigate the attack.
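The paper's exact detection logic is not given in the abstract; the sketch below shows one plausible heuristic over per-flow statistics that an SDN controller could report (for example, via OpenFlow flow-stats requests): flag flows that stay open much longer than normal while transferring very few bytes. The FlowStat fields and the thresholds are hypothetical.

```python
# Hedged sketch of a Slowloris-style heuristic over SDN flow statistics:
# suspect flows are long-lived but nearly silent. All names and thresholds
# are illustrative assumptions, not the paper's actual module.
from dataclasses import dataclass

@dataclass
class FlowStat:
    src_ip: str
    duration_sec: float   # how long the flow has been installed
    byte_count: int       # total bytes seen on the flow

MAX_IDLE_DURATION = 60.0   # seconds a benign HTTP connection rarely exceeds
MIN_BYTES_PER_SEC = 50.0   # Slowloris sends only trickles of header bytes

def suspected_slowloris(flows):
    """Return source IPs whose flows are long-lived but nearly silent."""
    suspects = set()
    for f in flows:
        rate = f.byte_count / max(f.duration_sec, 1e-6)
        if f.duration_sec > MAX_IDLE_DURATION and rate < MIN_BYTES_PER_SEC:
            suspects.add(f.src_ip)
    return suspects

flows = [FlowStat("10.0.0.5", 300.0, 900),    # 3 B/s for 5 min -> suspect
         FlowStat("10.0.0.7", 12.0, 48_000)]  # normal short transfer
print(suspected_slowloris(flows))             # {'10.0.0.5'}
```

On detection, a mitigation module of this kind could ask the controller to install a rule dropping or rate-limiting traffic from the suspect sources.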
Recent approaches have proven the effectiveness of local outlier factor-based outlier detection when applied over traffic flow probability distributions. However, these approaches used distance metrics based on the Bhattacharyya coefficient when calculating probability distribution similarity, and the limited expressiveness of the Bhattacharyya coefficient restricted their accuracy. The crucial deficiency of the Bhattacharyya distance metric is its inability to compare distributions with non-overlapping sample spaces over the domain of natural numbers. Traffic flow intensity varies greatly, which results in numerous non-overlapping sample spaces, rendering metrics based on the Bhattacharyya coefficient inappropriate. In this work, we address this issue by exploring alternative distance metrics and showing their applicability on a massive real-life traffic flow data set from 26 vital intersections in The Hague. The results on these data, collected from 272 sensors over more than two years, show the advantages of the Earth Mover's distance in both effectiveness and efficiency.
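The deficiency described above is easy to reproduce. In the sketch below (bin layout and counts are illustrative), two traffic-count histograms with non-overlapping supports always have a Bhattacharyya coefficient of 0, so the derived distance saturates regardless of how far apart the distributions lie, whereas the Earth Mover's distance still reflects the separation.

```python
# Hedged sketch of the abstract's core point: the Bhattacharyya coefficient
# is 0 for any pair of non-overlapping histograms, while the Earth Mover's
# distance still grows with the separation between them.
import numpy as np
from scipy.stats import wasserstein_distance

def bhattacharyya_coeff(p, q):
    return float(np.sum(np.sqrt(p * q)))

bins = np.arange(10)                              # traffic-count space 0..9
p  = np.array([0, .5, .5, 0, 0, 0, 0, 0, 0, 0])   # low-intensity flow
q1 = np.array([0, 0, 0, .5, .5, 0, 0, 0, 0, 0])   # moderately shifted
q2 = np.array([0, 0, 0, 0, 0, 0, 0, 0, .5, .5])   # strongly shifted

for q in (q1, q2):
    bc = bhattacharyya_coeff(p, q)                # 0.0 in both cases
    emd = wasserstein_distance(bins, bins, p, q)
    print(f"BC={bc:.2f}  EMD={emd:.2f}")
# BC=0.00  EMD=2.00
# BC=0.00  EMD=7.00  -> EMD still ranks how different the flows are
```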
Cloud security has become a serious challenge due to the increasing number of attacks occurring day by day. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game theory-based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for cloud security. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. The performance of the proposed model is evaluated on the CICIDS-2018 dataset. The dataset is normalized, and optimal points for normal and attack data are computed using the Improved Whale Algorithm (IWA). Simulation results show that the proposed model outperforms existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
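The GT-CSDNN model and the Improved Whale Algorithm are specific to the paper, but the reported evaluation metrics are standard; the sketch below shows how they would be computed for a binary attack/normal classifier, with placeholder labels and scores.

```python
# Hedged sketch: standard computation of the metrics the paper reports for
# a binary attack/normal classifier. The labels and scores are placeholders,
# not results from GT-CSDNN.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, f1_score,
                             roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])            # 1 = attack, 0 = normal
y_score = np.array([.1, .4, .9, .8, .3, .2, .7, .6])   # classifier outputs
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("F-score  ", f1_score(y_true, y_pred))
print("AUC      ", roc_auc_score(y_true, y_score))
print("FPR      ", fp / (fp + tn))   # false positive rate
print("detection", tp / (tp + fn))   # detection rate (recall)
```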
With a record 400Gbps 100-piece-FPGA implementation, we investigate the performance of potential FEC schemes for OIF-800GZR. By comparing the power dissipation and correction threshold at a BER of 10⁻¹⁵, we propose the simplified OFEC for the 800G-ZR FEC.
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer the verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g., addition, copy, and refresh gadgets) as well as quadratic gadgets (e.g., multiplication gadgets) that might include non-linear randomness (e.g., by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares that are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS, the only previous tool verifying random probing composability and expandability, as well as SILVER, the only previous tool providing complete verification for quadratic gadgets with non-linear randomness. Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
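IronMask's algebraic verification is beyond an abstract-level sketch, but the probing-model security notion it checks can be illustrated by brute force on a toy gadget. The sketch below is a simplification not drawn from the paper: it exhaustively verifies that no single probe on a two-share XOR refresh gadget over GF(2) reveals the secret.

```python
# Hedged toy illustration of first-order probing security: for every
# observable intermediate, the value distribution (over uniform shares and
# randomness) must be identical whether the secret is 0 or 1. This is a
# brute-force check, not IronMask's algebraic method.
from itertools import product

def refresh_gadget(a0, a1, r):
    # Intermediates a probe could observe: inputs, randomness, outputs.
    b0 = a0 ^ r
    b1 = a1 ^ r
    return {"a0": a0, "a1": a1, "r": r, "b0": b0, "b1": b1}

def first_order_secure(gadget, n_rand=1):
    names = gadget(0, 0, *([0] * n_rand)).keys()
    for name in names:
        dist = {0: [0, 0], 1: [0, 0]}  # secret -> counts of probe value 0/1
        for secret in (0, 1):
            for a0 in (0, 1):
                a1 = a0 ^ secret  # additive sharing: a0 ^ a1 = secret
                for rnd in product((0, 1), repeat=n_rand):
                    v = gadget(a0, a1, *rnd)[name]
                    dist[secret][v] += 1
        if dist[0] != dist[1]:
            return False, name  # this probe's distribution depends on secret
    return True, None

print(first_order_secure(refresh_gadget))  # (True, None)
```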
The Network Security and Risk (NSR) management team in an enterprise is responsible for maintaining the network, which includes switches, routers, firewalls, controllers, etc. Due to the ever-increasing threat of cyber-attacks that capitalize on vulnerabilities across the globe, a major objective of the NSR team is to keep the network infrastructure safe and secure. The NSR team ensures this by taking the proactive measure of periodic audits of network devices. Furthermore, external auditors are engaged in the audit process. Audit information is primarily stored in an internal database of the enterprise. This generic approach could result in a trust deficit during external audits. This paper proposes a method to improve the security and integrity of the audit information by using blockchain technology, which can greatly enhance the trust factor between the auditors and enterprises.
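The abstract does not detail the blockchain design; the sketch below illustrates the underlying tamper-evidence property with a minimal hash chain, in which every audit record commits to its predecessor, so any retroactive edit to a stored record invalidates all subsequent hashes. The record fields are illustrative.

```python
# Hedged sketch of tamper-evident audit logging via a minimal hash chain.
# Not the paper's actual design; field names are illustrative.
import hashlib
import json

def chain_audit_records(records):
    """Return records augmented with prev_hash/hash links."""
    chained, prev_hash = [], "0" * 64  # genesis pointer
    for rec in records:
        body = {"record": rec, "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chained.append({**body, "hash": digest})
        prev_hash = digest
    return chained

def verify(chained):
    prev_hash = "0" * 64
    for entry in chained:
        body = {"record": entry["record"], "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = chain_audit_records([
    {"device": "fw-01", "finding": "firmware outdated", "auditor": "ext-A"},
    {"device": "sw-12", "finding": "ok", "auditor": "ext-A"},
])
print(verify(log))                      # True
log[0]["record"]["finding"] = "ok"      # tamper with a past audit entry
print(verify(log))                      # False
```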
This article discusses a threat and vulnerability analysis model that makes it possible to fully analyze an organization's information security requirements and to document the results of the analysis. The use of this method makes it possible to avoid unnecessary costs for security measures arising from subjective risk assessment, to plan and implement protection at all stages of the information systems lifecycle, and, by automating the process, to minimize the time an information security specialist spends on information system risk assessment procedures while reducing both the error rate and the level of professional skill required of information security experts. In the initial sections, common risk analysis methods and risk assessment software are analyzed and conclusions are drawn from a comparative analysis; calculations are then carried out in accordance with the proposed model.