Bibliography
With the increasing number of catastrophic weather events and the resulting disruption of the energy supply to essential loads, distribution grid operators' focus has shifted from reliability to resiliency against high-impact, low-frequency events. Given the enhanced automation that enables the smarter grid, electric utilities have several assets and resources at their disposal to enhance resiliency. However, lacking comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in prioritizing investments and operational control actions for resiliency. Distribution system resilience is also highly dependent on system attributes, including the network, controls, generating resources, and the location of loads and resources, as well as on the progression of an extreme event. In this work, we present a novel multi-stage resilience measure called the Anticipate-Withstand-Recover (AWR) metrics. The AWR metrics integrate relevant 'system-characteristics-based factors' before, during, and after the extreme event. The developed methodology takes a pragmatic and flexible approach, adopting concepts from the national emergency preparedness paradigm, proactive and reactive control of grid assets, graph theory with system and component constraints, and multi-criteria decision-making. The proposed metrics are applied to provide decision support for a) operational resilience and b) planning investments, and are validated on a real system in Alaska over the entire progression of an event.
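The following is a minimal sketch of how a multi-stage, multi-criteria resilience score along the lines of the AWR idea could be composed. The factor names, weights, and flat averaging are illustrative assumptions, not the authors' exact formulation, which integrates the stages over the event progression.

```python
# Hypothetical sketch of an Anticipate-Withstand-Recover style composite score.
# Factor names, weights, and the aggregation rule are assumptions for illustration.

def stage_score(factors: dict, weights: dict) -> float:
    """Weighted sum of normalized factors (each factor assumed to lie in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(weights[k] * factors[k] for k in weights) / total_weight

# Example system snapshots before, during, and after an extreme event.
anticipate = stage_score(
    {"forecast_lead_time": 0.8, "der_reserve_margin": 0.6, "crew_readiness": 0.7},
    {"forecast_lead_time": 0.3, "der_reserve_margin": 0.4, "crew_readiness": 0.3},
)
withstand = stage_score(
    {"critical_load_served": 0.55, "network_connectivity": 0.6},
    {"critical_load_served": 0.6, "network_connectivity": 0.4},
)
recover = stage_score(
    {"restoration_speed": 0.5, "resource_availability": 0.65},
    {"restoration_speed": 0.5, "resource_availability": 0.5},
)

# A simple composite index; the real AWR metrics track these stages over time
# rather than collapsing them into one flat average.
awr_index = (anticipate + withstand + recover) / 3
print(f"A={anticipate:.2f} W={withstand:.2f} R={recover:.2f} AWR={awr_index:.2f}")
```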
Cancelable biometrics is a new era of technology that protects the privacy of a person's biometric content and thereby helps protect their identity. Instead of being stored directly in the authentication database, the biometric information is transformed into a non-invertible coded format that is then used for granting access. The conversion into an encrypted code requires an encryption key provided by the user. Both invertible and non-invertible coding techniques exist, but the non-invertible one provides additional security to the user. In this paper, a non-invertible cancelable biometric method is proposed in which the biometric image information is canceled and encoded into a code using a user-provided encryption key. This code is generated from the image histogram after the bins are iteratively updated toward the maximal value, and it is then encrypted with the Hill cipher. The resulting code is stored in the database instead of the biometric information. The technique is applied to a set of retinal images taken from the Indian Diabetic Retinopathy database.
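A rough sketch of the pipeline described above is given below: derive a histogram-based code from the biometric image and encrypt it with a Hill cipher keyed by the user. The bin-update rule, block size, and key handling are assumptions, not the authors' exact algorithm.

```python
# Minimal sketch (not the authors' exact algorithm) of a cancelable template:
# histogram-based code generation followed by Hill cipher encryption.
import numpy as np

def histogram_code(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Grey-level histogram folded into a fixed-length code vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    code = hist.copy()
    # Assumed "bin updation": repeatedly push each bin toward the running maximum
    # so the original histogram (and hence the image) cannot be recovered.
    for _ in range(3):
        code = np.maximum.accumulate(code)
    return (code % 256).astype(np.uint8)

def hill_encrypt(code: np.ndarray, key: np.ndarray, modulus: int = 256) -> np.ndarray:
    """Hill cipher: multiply blocks of the code by the user's key matrix mod 256."""
    n = key.shape[0]
    # Wrap-pad the code to a multiple of the block size before blocking it.
    padded = np.resize(code, ((len(code) + n - 1) // n) * n).reshape(-1, n)
    return (padded @ key % modulus).astype(np.uint8).ravel()

# Usage with a random stand-in "retina" image and a 4x4 user key matrix.
image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
user_key = np.array([[3, 3, 2, 5], [2, 5, 1, 3], [1, 2, 1, 1], [1, 1, 4, 3]])
template = hill_encrypt(histogram_code(image), user_key)
print(template)  # stored in the database instead of the raw biometric
```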
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of the alarms generated on a version are repeated, i.e., they have also been generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress the repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using the corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes with different priorities assigned to them. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms obtained by prioritizing the classes can help suppress lower-ranked alarms when the resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
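The sketch below illustrates the idea of ranking delta alarms by the kind of code change that made them "delta". The six class names and their priorities are placeholders, not the paper's exact taxonomy.

```python
# Illustrative prioritization of delta alarms by change type (placeholder classes).
from dataclasses import dataclass

# Lower number = inspect earlier (change more likely to truly impact the alarm).
CHANGE_PRIORITY = {
    "condition_modified": 1,
    "assignment_to_alarm_variable": 2,
    "called_function_changed": 3,
    "new_code_on_alarm_path": 4,
    "unrelated_statement_changed": 5,
    "formatting_or_comment_only": 6,
}

@dataclass
class DeltaAlarm:
    file: str
    line: int
    message: str
    change_type: str  # the code change due to which the VSAT kept this alarm

def rank_delta_alarms(alarms):
    """Order alarms so that classes more likely to be truly impacted come first."""
    return sorted(alarms, key=lambda a: CHANGE_PRIORITY.get(a.change_type, 6))

alarms = [
    DeltaAlarm("io.c", 120, "possible NULL dereference", "formatting_or_comment_only"),
    DeltaAlarm("parse.c", 88, "array index out of bounds", "condition_modified"),
]
for a in rank_delta_alarms(alarms):
    print(a.file, a.line, a.message)
```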
Concurrency vulnerabilities caused by synchronization problems occur during the execution of multi-threaded programs, and their emergence often poses great threats to the system. Once concurrency vulnerabilities are exploited, the system can suffer various attacks that seriously affect its availability, confidentiality, and security. In this paper, we extract 839 concurrency vulnerabilities from Common Vulnerabilities and Exposures (CVE) and conduct a comprehensive analysis of their trend, classifications, causes, severity, and impact. We obtained the following findings: 1) from 1999 to 2021, the number of disclosed concurrency vulnerabilities shows an overall upward trend; 2) in the distribution of concurrency vulnerabilities, race conditions account for the largest proportion; 3) the overall severity of concurrency vulnerabilities is medium risk; 4) the numbers of concurrency vulnerabilities exploitable via local access and via network access are almost equal, and nearly half of the concurrency vulnerabilities (377/839) can be accessed remotely; and 5) the access complexity of 571 concurrency vulnerabilities is medium, and the numbers of concurrency vulnerabilities with high and low access complexity are almost equal. The results obtained through this empirical study can provide more support and guidance for research in the field of concurrency vulnerabilities.
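A small sketch of the kind of tallying behind findings 4) and 5) follows: counting CVSS access vector and access complexity over a set of concurrency-related CVE records. The record fields and sample values are assumptions for illustration only.

```python
# Assumed record layout; real CVE/NVD exports carry these fields in CVSS metadata.
from collections import Counter

cve_records = [
    {"id": "CVE-2021-0001", "access_vector": "NETWORK", "access_complexity": "MEDIUM"},
    {"id": "CVE-2020-0002", "access_vector": "LOCAL", "access_complexity": "HIGH"},
    {"id": "CVE-2019-0003", "access_vector": "NETWORK", "access_complexity": "MEDIUM"},
]

vector_counts = Counter(r["access_vector"] for r in cve_records)
complexity_counts = Counter(r["access_complexity"] for r in cve_records)
print("access vector:", dict(vector_counts))
print("access complexity:", dict(complexity_counts))
```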
Researchers have investigated the dark web for various purposes and with various approaches. Most dark web data investigations have focused on analyzing text collected from HTML pages of websites hosted on the dark web. In addition, researchers have documented work on dark web image data analysis for a specific domain, such as identifying and analyzing Child Sexual Abuse Material (CSAM) on the dark web. However, image data from dark web marketplace postings and forums could also be helpful in forensic analysis of dark web investigations. The presented work attempts to conduct image classification on classes other than CSAM. Nevertheless, manually scanning thousands of websites from the dark web for visual evidence of criminal activity is time and resource intensive. Therefore, the proposed work uses quantum computing to classify the images with a Quantum Convolutional Neural Network (QCNN). The authors classify dark web images into four categories: alcohol, drugs, devices, and cards. The dataset used for the work discussed in the paper consists of around 1242 images and combines an open source dataset with data collected by the authors. The paper discusses the implementation of the QCNN and reports related performance measures.
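Below is a hedged sketch of a quanvolutional-style filter in the spirit of a QCNN front end for such four-class image classification. The library choice (PennyLane), circuit layout, patch size, and the downstream classifier are assumptions; the paper's exact QCNN may differ.

```python
# Quantum patch filter applied over an image, producing a feature map for a
# classical classifier head. Circuit design here is illustrative only.
import numpy as np
import pennylane as qml

n_qubits = 4  # one qubit per pixel in a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_patch_filter(patch):
    # Angle-encode the 2x2 patch (pixels assumed scaled to [0, 1]).
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # A small entangling layer acting as the quantum "convolution".
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanvolve(image):
    """Slide the quantum filter over non-overlapping 2x2 patches of a grayscale image."""
    h, w = image.shape
    out = np.zeros((h // 2, w // 2, n_qubits))
    for r in range(0, h - 1, 2):
        for c in range(0, w - 1, 2):
            patch = image[r:r + 2, c:c + 2].ravel()
            out[r // 2, c // 2] = quantum_patch_filter(patch)
    return out  # feature map fed to a small classical classifier head

features = quanvolve(np.random.rand(8, 8))  # stand-in for a preprocessed dark web image
print(features.shape)  # (4, 4, 4)
```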
This paper deals with the problem of image forgery detection because of the problems forged images cause: fake images can lead to social problems, for example by misleading public opinion about political or religious figures, defaming celebrities and other people, or, when presented in a court of law as evidence, misleading the court. This work proposes a deep learning approach based on a deep CNN (Convolutional Neural Network) architecture to detect fake images. The network is based on a modified structure of the Xception net, a CNN built on depthwise separable convolution layers. After the feature maps are extracted, pooling layers with dense connections to the Xception output are used to increase the number of feature maps, inspired by the idea of the DenseNet network. In addition, the work uses the YCbCr color space for the images, which gave a better accuracy of 99.93% than RGB, HSV, Lab, or other color spaces.
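A sketch of this kind of setup, under assumptions, is shown below: convert images to YCbCr and feed them to an Xception backbone with an extra DenseNet-inspired concatenation before the classifier. Layer sizes and the exact connection pattern are illustrative, not the paper's architecture.

```python
# Illustrative Xception-based fake/genuine classifier over YCbCr inputs.
import numpy as np
import tensorflow as tf
from PIL import Image

def load_ycbcr(path, size=(299, 299)):
    """Load an image and convert it to the YCbCr color space as a float array."""
    img = Image.open(path).convert("YCbCr").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

base = tf.keras.applications.Xception(include_top=False, weights=None,
                                      input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
# DenseNet-inspired touch: concatenate pooled features with a dense projection of them.
x = tf.keras.layers.Concatenate()([x, tf.keras.layers.Dense(256, activation="relu")(x)])
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # fake vs. genuine
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```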
Server-side web applications are vulnerable to request races. While some previous studies of real-world request races exist, they primarily focus on the root causes of these bugs. To better combat request races in server-side web applications, we need a deep understanding of their characteristics. In this paper, we provide a complementary focus on race effects and fixes with an enlarged set of request races from web applications developed with Object-Relational Mapping (ORM) frameworks. We revisit characterization questions used in previous studies on the newly included request races, distinguish the external and internal effects of request races, and relate request-race fixes to concurrency control mechanisms in the languages and frameworks used for developing server-side web applications. Our study reveals that: (1) request races from ORM-based web applications share the same characteristics as those from raw-SQL web applications; (2) request races that violate application semantics without externally visible crashes or error messages are common, and latent request races, which only corrupt some shared resource internally and require extra requests to expose the misbehavior, are also common; and (3) various fix strategies other than synchronization mechanisms are used to fix request races. We expect that our results can help developers better understand request races and guide the design and development of tools for combating request races.
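For concreteness, here is an illustrative ORM-level request race and one common fix; the example is not drawn from the studied applications, and the Coupon model is hypothetical. Two concurrent "redeem" requests can both pass the check before either saves, a semantic violation with no crash or error message.

```python
# Django-style example: check-then-act race on a shared row, and a row-lock fix.
from django.db import transaction
from myapp.models import Coupon  # hypothetical model with a `remaining_uses` field

def redeem_coupon_racy(coupon_id):
    coupon = Coupon.objects.get(pk=coupon_id)
    if coupon.remaining_uses > 0:      # check
        coupon.remaining_uses -= 1     # act (another request may interleave here)
        coupon.save()
        return True
    return False

def redeem_coupon_fixed(coupon_id):
    # One fix strategy: take a row lock inside a transaction so the check and
    # the update are atomic with respect to other requests.
    with transaction.atomic():
        coupon = Coupon.objects.select_for_update().get(pk=coupon_id)
        if coupon.remaining_uses > 0:
            coupon.remaining_uses -= 1
            coupon.save()
            return True
    return False
```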
Distributed Denial of Service (DDoS) attacks aim to make a server unresponsive by flooding the target server with a large volume of packets (volume-based DDoS attacks), by keeping connections open for a long time and exhausting its resources (low-and-slow DDoS attacks), or by targeting protocols (protocol-based attacks). Volume-based DDoS attacks that flood the target server with a large number of packets are easier to detect because of the abnormality in the packet flow. Low-and-slow DDoS attacks, however, make the server unavailable by keeping connections open for a long time while sending traffic similar to genuine traffic, making such attacks difficult to detect. This paper proposes a solution to detect and mitigate one such low-and-slow DDoS attack, Slowloris, in an SDN (Software Defined Networking) environment. The proposed solution involves communication between the detection and mitigation module and the controller of the Software Defined Network to obtain the data needed to detect and mitigate the low-and-slow DDoS attack.
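A minimal sketch of a Slowloris-style detection heuristic in this setting is given below: flag clients holding many long-lived connections that send very little data, then ask the controller to block them. The flow-statistics format, thresholds, and the `install_drop_rule` call are placeholders for controller-specific APIs, not the paper's module.

```python
# Heuristic Slowloris detection over per-flow statistics pulled from an SDN controller.
from collections import defaultdict

LONG_LIVED_SECONDS = 60
LOW_RATE_BYTES_PER_SEC = 50
MAX_SLOW_CONNS_PER_CLIENT = 20

def detect_slowloris(flow_stats):
    """flow_stats: iterable of dicts with src_ip, duration_sec, bytes_sent."""
    slow_conns = defaultdict(int)
    for f in flow_stats:
        rate = f["bytes_sent"] / max(f["duration_sec"], 1)
        if f["duration_sec"] > LONG_LIVED_SECONDS and rate < LOW_RATE_BYTES_PER_SEC:
            slow_conns[f["src_ip"]] += 1
    return [ip for ip, n in slow_conns.items() if n > MAX_SLOW_CONNS_PER_CLIENT]

def mitigate(suspects, controller):
    for ip in suspects:
        controller.install_drop_rule(src_ip=ip)  # placeholder controller API
```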