Bibliography
Real-time situational awareness (SA) plays an essential role in accurate and timely incident response. Maintaining SA is, however, extremely costly due to excessive false alerts generated by intrusion detection systems, which require prioritization and manual investigation by security analysts. In this paper, we propose a novel approach to prioritizing alerts so as to maximize SA, by formulating the problem as one of active learning in a hidden Markov model (HMM). We propose to use the entropy of the belief of the security state as a proxy for the mean squared error (MSE) of the belief, and we develop two computationally tractable policies for choosing alerts to investigate that minimize the entropy, taking into account the potential uncertainty of the investigations' results. We use simulations to compare our policies to a variety of baseline policies. We find that our policies reduce the MSE of the belief of the security state by up to 50% compared to static baseline policies, and they are robust to high false alert rates and to investigation errors.
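As a rough illustration of the entropy-minimizing selection idea described above, the following Python sketch computes the expected posterior entropy of a two-state security belief for each candidate alert and picks the alert whose (possibly erroneous) investigation is expected to reduce the entropy most; the state space, likelihood matrices, and one-step greedy lookahead are illustrative assumptions, not the paper's exact policies.

```python
import numpy as np

def entropy(b):
    """Shannon entropy of a belief vector (used as a proxy for the MSE of the belief)."""
    b = b[b > 0]
    return -np.sum(b * np.log(b))

def expected_posterior_entropy(belief, likelihood):
    """Expected entropy of the posterior after one investigation.

    likelihood[s, o] = P(observation o | hidden state s); the rows encode the
    (possibly erroneous) outcome of investigating this alert.
    """
    exp_H = 0.0
    for o in range(likelihood.shape[1]):
        joint = belief * likelihood[:, o]          # P(state, observation)
        p_o = joint.sum()                          # P(observation)
        if p_o > 0:
            exp_H += p_o * entropy(joint / p_o)    # weight posterior entropy by P(o)
    return exp_H

def choose_alert(belief, alert_likelihoods):
    """Greedy entropy-minimizing choice among candidate alerts (one-step lookahead)."""
    scores = [expected_posterior_entropy(belief, L) for L in alert_likelihoods]
    return int(np.argmin(scores))

# Toy example: two hidden security states (benign, compromised), two candidate alerts.
belief = np.array([0.7, 0.3])
alerts = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),   # informative but imperfect investigation
    np.array([[0.6, 0.4], [0.5, 0.5]]),   # nearly uninformative investigation
]
print("investigate alert", choose_alert(belief, alerts))
```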
Cyber-Physical Systems (CPSs), a class of complex intelligent systems, are considered the backbone of Industry 4.0. They aim to achieve large-scale, networked control of dynamical systems and processes such as electricity and gas distribution networks and to deliver pervasive information services by combining state-of-the-art computing, communication, and control technologies. However, CPSs are often highly nonlinear and uncertain, and their intrinsic reliance on open communication platforms increases their vulnerability to security threats, which poses additional challenges for conventional control design approaches. Indeed, sensor measurements and control command signals, whose integrity plays a critical role in correct controller design, may be interrupted or falsely modified when broadcast over wireless communication channels due to cyber attacks. This can have a catastrophic impact on CPS performance. In this paper, we first conduct a thorough analysis of recently developed secure and resilient control approaches that leverage the solid foundations of adaptive control theory to achieve security and resilience in networked CPSs against sensor and actuator attacks. Then, we discuss the limitations of current adaptive control strategies and present several future research directions in this field.
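To make the attack model concrete, here is a minimal sketch of a scalar model reference adaptive controller (MRAC) whose adaptation uses a measurement corrupted by an additive sensor bias after t = 5 s. It is a generic textbook scheme meant only to illustrate how sensor attacks distort an adaptive loop, not any of the resilient designs surveyed in the paper; all numbers are assumptions.

```python
# Plant: x_dot = a*x + b*u (a, b unknown to the controller); measurement y = x + attack.
a_true, b_true = 1.0, 3.0
# Reference model: xm_dot = -4*xm + 4*r (stable behavior the closed loop should track).
am, bm = -4.0, 4.0
gamma = 2.0                      # adaptation gain
dt, T = 0.001, 10.0
steps = int(T / dt)

x = xm = 0.0
kx = kr = 0.0                    # adaptive feedback / feedforward gains
for i in range(steps):
    t = i * dt
    r = 1.0                                  # constant reference command
    attack = 0.5 if t > 5.0 else 0.0         # additive sensor attack after t = 5 s
    y = x + attack                           # corrupted measurement seen by controller
    u = kx * y + kr * r
    e = y - xm                               # tracking error computed from measurement
    # Standard gradient-type MRAC adaptation laws (sign of b assumed known, positive).
    kx -= gamma * e * y * dt
    kr -= gamma * e * r * dt
    # Euler integration of plant and reference model.
    x += (a_true * x + b_true * u) * dt
    xm += (am * xm + bm * r) * dt

print(f"true state {x:.3f}, reference {xm:.3f}, offset induced by attack {x - xm:.3f}")
```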
Accurate and synchronized timing information is required by power system operators for controlling the grid infrastructure (relays, Phasor Measurement Units (PMUs), etc.) and determining asset positions. The satellite-based Global Positioning System (GPS) is the primary source of timing information. However, GPS disruptions today (both intentional and unintentional) can significantly compromise the reliability and security of our electric grids. A robust alternate source of accurate timing is critical to serve both as a deterrent against malicious attacks and as a redundant system that enhances resilience against extreme events that could disrupt the GPS network. To achieve this, we rely on a highly accurate, terrestrial atomic clock-based network for alternative timing and synchronization. In this paper, we discuss an experimental setup for an alternative timing approach. The data obtained from this experimental setup is continuously monitored and analyzed using various time deviation metrics. We also use these metrics to compute deviations of our clock with respect to the National Institute of Standards and Technology's (NIST) GPS data. The results obtained from these metric computations are discussed in detail. Finally, we discuss the integration of the procedures involved, such as real-time data ingestion, metric computation, and result visualization, into a novel microservices-based architecture for situational awareness.
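The time deviation (TDEV) metric mentioned above can be estimated directly from time-error samples. The sketch below implements the standard TDEV estimator and applies it to synthetic phase data; the sampling interval, noise level, and averaging factors are chosen purely for illustration.

```python
import numpy as np

def tdev(x, tau0, n):
    """Time deviation TDEV(n*tau0) from time-error samples x (seconds), spacing tau0.

    Standard estimator: TVAR(tau) = 1/(6 n^2 (N-3n+1)) * sum_j [ sum_{i=j}^{j+n-1}
    (x[i+2n] - 2*x[i+n] + x[i]) ]^2, and TDEV = sqrt(TVAR).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N < 3 * n + 1:
        raise ValueError("not enough samples for this averaging factor")
    terms = []
    for j in range(N - 3 * n + 1):
        inner = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        terms.append(inner ** 2)
    tvar = np.sum(terms) / (6.0 * n ** 2 * (N - 3 * n + 1))
    return np.sqrt(tvar)

# Toy example: white phase noise around a constant offset, sampled once per second.
rng = np.random.default_rng(0)
phase = 1e-7 + 5e-9 * rng.standard_normal(2000)     # time error in seconds
for n in (1, 10, 100):
    print(f"TDEV at tau = {n} s: {tdev(phase, 1.0, n):.2e} s")
```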
With the increasing number of catastrophic weather events and the resulting disruption of the energy supply to essential loads, distribution grid operators' focus has shifted from reliability to resiliency against high-impact, low-frequency events. Given the enhanced automation that enables the smarter grid, electric utilities have several assets and resources at their disposal to enhance resiliency. However, lacking comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in investing in and prioritizing operational control actions for resiliency. Distribution system resilience is also highly dependent on system attributes, including the network, controls, generating resources, the location of loads and resources, and the progression of an extreme event. In this work, we present a novel multi-stage resilience measure called the Anticipate-Withstand-Recover (AWR) metrics. The AWR metrics are based on integrating relevant 'system-characteristics-based factors' before, during, and after the extreme event. The developed methodology takes a pragmatic and flexible approach by adopting concepts from the national emergency preparedness paradigm, proactive and reactive controls of grid assets, graph theory with system and component constraints, and a multi-criteria decision-making process. The proposed metrics are applied to provide decision support for (a) operational resilience and (b) planning investments, and are validated on a real system in Alaska over the entire progression of an event.
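A minimal sketch of how stage-wise factor scores could be aggregated into a single AWR-style score is shown below; the factor names, weights, and equal stage weighting are placeholder assumptions, since the paper's actual multi-criteria formulation is not reproduced here.

```python
import numpy as np

# Hypothetical normalized factor scores in [0, 1] for each resilience stage; the
# actual AWR factors and weights come from the paper's multi-criteria analysis.
factors = {
    "anticipate": {"forecast_lead_time": 0.8, "preemptive_switching": 0.6, "fuel_reserve": 0.7},
    "withstand":  {"backup_generation": 0.5, "network_redundancy": 0.4, "critical_load_share": 0.9},
    "recover":    {"crew_availability": 0.6, "restoration_paths": 0.7, "black_start_capability": 0.3},
}
# Per-factor weights (e.g., from a pairwise-comparison exercise), one set per stage.
weights = {
    "anticipate": np.array([0.5, 0.3, 0.2]),
    "withstand":  np.array([0.4, 0.3, 0.3]),
    "recover":    np.array([0.4, 0.4, 0.2]),
}
stage_scores = {
    stage: float(np.dot(weights[stage], np.array(list(vals.values()))))
    for stage, vals in factors.items()
}
# Aggregate the three stages into one AWR score (equal stage weights assumed here).
awr_score = np.mean(list(stage_scores.values()))
print(stage_scores, f"AWR = {awr_score:.2f}")
```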
The bus factor is a metric that identifies how resilient a project is to sudden engineer turnover. It states the minimal number of engineers that have to be hit by a bus for a project to be stalled. Even though the metric is often discussed in the community, few studies consider its general relevance. Moreover, existing tools for bus factor estimation focus solely on data from version control systems, even though other channels for knowledge generation and distribution exist. With a survey of 269 engineers, we find that the bus factor is perceived as an important problem in collective development, and we determine the highest-impact channels of knowledge generation and distribution in software development teams. We also propose a multimodal bus factor estimation algorithm that uses data on code reviews and meetings together with VCS data. We test the algorithm on 13 projects developed at JetBrains and compare its results to those of the state-of-the-art tool by Avelino et al. against the ground truth collected in a survey of the engineers working on these projects. Our algorithm is slightly better than the tool by Avelino et al. at predicting both the bus factor and the key developers. Finally, we use the interviews and surveys to derive a set of best practices for addressing the bus factor issue and proposals for a possible bus factor assessment tool.
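The following sketch illustrates a multimodal, greedy bus factor estimate in the spirit described above: per-file knowledge is accumulated from commits, code reviews, and meetings with hypothetical channel weights, and top knowledge holders are removed until more than half of the files are left without a knowledgeable engineer. The weights, threshold, and sample data are assumptions, not the calibrated algorithm from the paper.

```python
from collections import defaultdict

# Hypothetical per-(file, engineer) contribution counts from three channels; the
# channel weights are illustrative.
CHANNEL_WEIGHTS = {"commits": 1.0, "reviews": 0.5, "meetings": 0.25}

def knowledge(contributions):
    """Aggregate a weighted degree-of-knowledge score per (file, engineer)."""
    dok = defaultdict(float)
    for channel, weight in CHANNEL_WEIGHTS.items():
        for (file, engineer), count in contributions.get(channel, {}).items():
            dok[(file, engineer)] += weight * count
    return dok

def bus_factor(contributions, coverage=0.5):
    """Greedy estimate: remove top knowledge holders until more than `coverage`
    of the files have lost every engineer who knew them."""
    dok = knowledge(contributions)
    files = {f for f, _ in dok}
    removed, factor = set(), 0
    while True:
        orphaned = {f for f in files
                    if all(e in removed for (g, e) in dok if g == f)}
        if len(orphaned) > coverage * len(files):
            return factor
        per_engineer = defaultdict(float)
        for (f, e), k in dok.items():
            if e not in removed:
                per_engineer[e] += k
        if not per_engineer:
            return factor
        removed.add(max(per_engineer, key=per_engineer.get))
        factor += 1

example = {
    "commits":  {("core.py", "alice"): 40, ("core.py", "bob"): 5, ("ui.py", "bob"): 20},
    "reviews":  {("core.py", "bob"): 10, ("ui.py", "carol"): 8},
    "meetings": {("ui.py", "alice"): 4},
}
print("estimated bus factor:", bus_factor(example))
```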
Cancelable biometrics is an emerging technology that protects the privacy of a person's biometric content and thereby helps protect the person's identity. Instead of being stored directly in the authentication database, the biometric information is transformed into a non-invertible coded format that is used for providing access. The conversion into an encrypted code requires an encryption key provided by the user. Both invertible and non-invertible coding techniques exist, but the non-invertible kind provides additional security to the user. In this paper, a non-invertible cancelable biometric method is proposed in which the biometric image information is canceled and encoded into a code using a user-provided encryption key. This code is generated from the image histogram after continuously updating bins to the maximal value, and it is then encrypted with the Hill cipher. The resulting code, rather than the biometric information, is stored in the database. The technique is applied to a set of retinal images taken from the Indian Diabetic Retinopathy database.
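A minimal sketch of the two building blocks as we read them from the abstract follows: a code derived from the image histogram (the exact bin-updation rule is assumed here to be a running maximum) and blockwise Hill-cipher encryption with a user-supplied key matrix. The key and the stand-in image are illustrative.

```python
import numpy as np

def histogram_code(image, bins=256):
    """Derive a fixed-length code from the image histogram. The histogram step is
    not invertible back to the image; the bin-update rule below (running maximum)
    is an assumption about the 'updation to the maximal value' in the abstract."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, bins))
    code = np.maximum.accumulate(hist)          # running-maximum bin update (assumed)
    return (code % 256).astype(np.uint8)        # keep values in one byte for the cipher

def hill_encrypt(code, key):
    """Blockwise Hill-cipher encryption of the code modulo 256. The 2x2 key matrix
    must have an odd determinant so it is invertible mod 256 for the key holder."""
    block = key.shape[0]
    if len(code) % block:
        code = np.pad(code, (0, block - len(code) % block))
    blocks = code.reshape(-1, block).T
    return (key @ blocks % 256).T.astype(np.uint8).ravel()

rng = np.random.default_rng(1)
retina = rng.integers(0, 256, size=(64, 64))    # stand-in for a retinal image
key = np.array([[3, 3], [2, 5]])                # user-supplied key, det = 9 (odd)
template = hill_encrypt(histogram_code(retina), key)
print(template[:16])                            # this template, not the image, is stored
```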
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of the alarms generated on a version are repeated, i.e., they have also been generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress the repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after the suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using the corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes with different priorities assigned to them. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms obtained by prioritizing the classes can help suppress lower-ranked alarms when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
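The sketch below illustrates the general idea of classifying and ranking delta alarms by the kind of code change that produced them; the class names, priorities, and change-kind matching are placeholders, not the six classes defined in the paper.

```python
# Illustrative postprocessing of delta alarms by the kind of code change that made the
# version-aware tool report them; class names and priorities are placeholder assumptions.
PRIORITY = {                     # lower number = inspect earlier
    "expression_in_alarm_changed": 1,
    "control_flow_guard_changed": 2,
    "called_function_body_changed": 3,
    "unrelated_declaration_changed": 4,
}

def classify_delta_alarm(alarm, code_change):
    """Map a delta alarm to a priority class based on its corresponding code change."""
    if code_change["kind"] == "expression" and code_change["line"] == alarm["line"]:
        return "expression_in_alarm_changed"
    if code_change["kind"] == "condition":
        return "control_flow_guard_changed"
    if code_change["kind"] == "callee_body":
        return "called_function_body_changed"
    return "unrelated_declaration_changed"

def rank_alarms(alarms_with_changes):
    """Sort delta alarms so the classes more likely to be impacted come first."""
    return sorted(alarms_with_changes,
                  key=lambda ac: PRIORITY[classify_delta_alarm(*ac)])

alarms = [
    ({"id": 1, "line": 42}, {"kind": "expression", "line": 42}),
    ({"id": 2, "line": 10}, {"kind": "callee_body", "line": 300}),
    ({"id": 3, "line": 77}, {"kind": "unrelated", "line": 5}),
]
print([a["id"] for a, _ in rank_alarms(alarms)])
```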
False news has become widespread in the last decade in political, economic, and social dimensions. This has been aided by the deep entrenchment of social media networking in these dimensions. Facebook and Twitter have been known to influence people's behavior significantly. People rely on news and information posted on their favorite social media sites to make purchase decisions. Also, news posted on mainstream and social media platforms has a significant impact on a particular country's economic stability and social tranquility. Therefore, there is a need to develop a deception detection system that evaluates news to avoid the repercussions resulting from the rapid dispersion of fake news on social media and other online platforms. To achieve this, the proposed system uses the results of the preprocessing stage to assign specific vectors to words. Each vector assigned to a word represents an intrinsic characteristic of the word. The resulting word vectors are then fed to RNN models before proceeding to the LSTM model. The output of the LSTM is used to determine whether the news article is fake or genuine.
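A compact PyTorch sketch of the described pipeline (word embeddings, a recurrent layer, an LSTM layer, and a binary fake/real output) is given below; the vocabulary size, layer dimensions, and single-layer choices are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class FakeNewsLSTM(nn.Module):
    """Word embeddings feeding a recurrent layer and an LSTM, ending in a binary
    fake/real score. Layer sizes are illustrative assumptions, not the paper's."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        x, _ = self.rnn(x)                         # simple recurrent layer
        x, (h_n, _) = self.lstm(x)                 # LSTM; h_n is the final hidden state
        return torch.sigmoid(self.out(h_n[-1]))    # probability the article is fake

# Toy forward pass on a batch of two padded token-id sequences.
model = FakeNewsLSTM()
batch = torch.randint(1, 20000, (2, 50))
print(model(batch).detach().squeeze(-1))
```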
Concurrency vulnerabilities caused by synchronization problems occur in the execution of multi-threaded programs, and their emergence often poses great threats to the system. Once concurrency vulnerabilities are exploited, the system can suffer various attacks that seriously affect its availability, confidentiality, and security. In this paper, we extract 839 concurrency vulnerabilities from Common Vulnerabilities and Exposures (CVE) and conduct a comprehensive analysis of their trend, classifications, causes, severity, and impact. We obtain the following findings: 1) From 1999 to 2021, the number of concurrency vulnerability disclosures shows an overall upward trend. 2) In the distribution of concurrency vulnerabilities, race conditions account for the largest proportion. 3) The overall severity of concurrency vulnerabilities is medium risk. 4) The numbers of concurrency vulnerabilities exploitable via local access and via network access are almost equal, and nearly half of the concurrency vulnerabilities (377/839) can be accessed remotely. 5) The access complexity of 571 concurrency vulnerabilities is medium, and the numbers of concurrency vulnerabilities with high and low access complexity are almost equal. The results of this empirical study can provide more support and guidance for research in the field of concurrency vulnerabilities.
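The kind of tallying used in such an empirical study can be sketched as follows; the record format mirrors CVSS v2 base metrics and the sample entries are invented for illustration.

```python
from collections import Counter

# Minimal tallying of the sort used in the empirical analysis; the field names follow
# CVSS v2 base metrics, and the sample entries below are made up.
cves = [
    {"id": "CVE-2021-0001", "cwe": "CWE-362", "access_vector": "NETWORK", "access_complexity": "MEDIUM", "severity": "MEDIUM"},
    {"id": "CVE-2020-0002", "cwe": "CWE-362", "access_vector": "LOCAL",   "access_complexity": "MEDIUM", "severity": "HIGH"},
    {"id": "CVE-2019-0003", "cwe": "CWE-667", "access_vector": "NETWORK", "access_complexity": "LOW",    "severity": "MEDIUM"},
]

by_class = Counter(c["cwe"] for c in cves)            # e.g. CWE-362 is a race condition
by_vector = Counter(c["access_vector"] for c in cves)
by_complexity = Counter(c["access_complexity"] for c in cves)
remote_share = by_vector["NETWORK"] / len(cves)

print(by_class, by_vector, by_complexity, f"remotely accessible: {remote_share:.0%}")
```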