Biblio
Network security policies contain requirements, including system and software features as well as expected and desired actions of human actors. In this paper, we present a framework for evaluating textual network security policies as requirements documents in order to identify areas for improvement. Specifically, our framework concentrates on completeness. We use topic modeling coupled with expert evaluation to learn the complete list of important topics that should be addressed in a network security policy. Using these topics as a checklist, we evaluate a collection of network security policies for completeness, i.e., the degree to which these topics are present in the text. We developed three methods for topic recognition to identify missing or poorly addressed topics. We examine network security policies and report the results of our analysis, which indicate the preliminary success of our approach.
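A minimal sketch of the general idea of checking topic coverage with a topic model follows; the corpus, topic count, and coverage threshold are illustrative assumptions, not the paper's actual pipeline or recognition methods.

```python
# Minimal sketch: learn topics from a reference corpus of security policies,
# then score how strongly each topic appears in a single policy under review.
# Corpus contents, topic count, and the coverage threshold are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reference_policies = [
    "passwords must be rotated every ninety days and never shared",
    "remote access requires vpn and multi factor authentication",
    "incident response team must be notified within one hour of a breach",
    # ... more policy documents from the training corpus
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reference_policies)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Score a policy under review: topic weights below the threshold are
# flagged as potentially missing or poorly addressed topics.
policy_under_review = "all staff must use a vpn when connecting remotely"
weights = lda.transform(vectorizer.transform([policy_under_review]))[0]
THRESHOLD = 0.15
for topic_id, weight in enumerate(weights):
    status = "covered" if weight >= THRESHOLD else "possibly missing"
    print(f"topic {topic_id}: weight={weight:.2f} -> {status}")
```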
A common way to characterize security enforcement mechanisms is based on the time at which they operate. Mechanisms operating before a program's execution are static mechanisms, and mechanisms operating during a program's execution are dynamic mechanisms. This paper introduces a different perspective and classifies mechanisms based on the granularity of program code that they monitor. Classifying mechanisms in this way provides a unified view of security mechanisms and shows that all security mechanisms can be encoded as dynamic mechanisms that operate at different levels of program code granularity. The practicality of the approach is demonstrated through a prototype implementation of a framework for enforcing security policies at various levels of code granularity on Java bytecode applications.
This paper first introduces big data services and cloud computing models based on different processing forms, and analyzes the authentication technologies and security services of existing big data systems to understand their processing characteristics. The operating principles and complexity of big data services and cloud computing are also studied, and their suitable environments, advantages, and drawbacks are summarized. Building on cloud computing, the author proposes the Model of Big Data Cloud Computing based on Extended Subjective Logic (MBDCC-ESL), which introduces Jøsang's subjective logic to test data credibility and extends it to address the trustworthiness of big data in the cloud computing environment. Simulation results show that the model performs well.
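As background for the subjective-logic component, the sketch below shows standard binomial opinion fusion in Jøsang's subjective logic; it is a stand-in for the credibility test that MBDCC-ESL extends, and the numeric inputs and equal-base-rate assumption are illustrative.

```python
# Minimal sketch of binomial opinion fusion in Jøsang's subjective logic.
# Assumes both sources share the same base rate; the inputs are made up.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # b
    disbelief: float    # d
    uncertainty: float  # u  (b + d + u == 1)
    base_rate: float    # a

    def expected(self) -> float:
        """Projected probability E = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fusion(x: Opinion, y: Opinion) -> Opinion:
    """Standard cumulative fusion of two independent opinions."""
    k = x.uncertainty + y.uncertainty - x.uncertainty * y.uncertainty
    return Opinion(
        belief=(x.belief * y.uncertainty + y.belief * x.uncertainty) / k,
        disbelief=(x.disbelief * y.uncertainty + y.disbelief * x.uncertainty) / k,
        uncertainty=(x.uncertainty * y.uncertainty) / k,
        base_rate=x.base_rate,  # assumed equal for both sources
    )

# Two sources report on the credibility of the same data item.
source_a = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2, base_rate=0.5)
source_b = Opinion(belief=0.5, disbelief=0.3, uncertainty=0.2, base_rate=0.5)
fused = cumulative_fusion(source_a, source_b)
print(f"fused credibility estimate: {fused.expected():.3f}")
```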
Cloud computing is one of today's emerging fields; it provides a platform for maintaining users' data and privacy. Access control methods are used to process and regulate data with high security. The cloud environment faces several challenges, such as robustness and security issues. Conventional methods such as Ciphertext-Policy Attribute-Based Encryption (CP-ABE) provide strong security, but problems remain, including the absence of attribute revocation and limited efficiency. Hence, this research focuses on an attribute-based mechanism to maximize efficiency. The first objective of this work is to define the attributes for a set of users. Second, the data is re-encrypted according to the access policies defined for the particular file. The re-encryption process provides information to the cloud server so that it can verify the authenticity of the user even when the owner is offline. The main advantage of this work is that it evaluates multiple attributes and allows only users who possess those attributes to access the data. The results show that the proposed data sharing scheme supports revocation under a fine-grained attribute structure.
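The following is a conceptual sketch of the access-policy logic behind CP-ABE-style schemes, i.e., deciding whether a user's attribute set satisfies a policy of nested AND/OR clauses; it illustrates the policy evaluation only, not the pairing-based cryptography or the paper's re-encryption and revocation scheme.

```python
# Conceptual sketch only: evaluating whether a user's attribute set satisfies
# an access policy expressed as nested AND/OR clauses, in the spirit of
# CP-ABE access structures. Policy and attribute names are invented.

def satisfies(policy, attributes: set) -> bool:
    """policy is either an attribute string or ('AND'|'OR', [subpolicies])."""
    if isinstance(policy, str):
        return policy in attributes
    op, clauses = policy
    results = (satisfies(c, attributes) for c in clauses)
    return all(results) if op == "AND" else any(results)

# Example policy: (doctor AND cardiology) OR auditor
policy = ("OR", [("AND", ["doctor", "cardiology"]), "auditor"])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"nurse"}))                 # False
```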
Humans are widely identified as the weakest link in cybersecurity. Tertiary institution students face many cybersecurity issues due to their constant Internet exposure; however, the literature on tertiary institution students' cybersecurity behaviors is sparse. This research aimed to link the factors responsible for tertiary institution students' cybersecurity behavior through validated cybersecurity factors: Perceived Vulnerability (PV), Perceived Barriers (PBr), Perceived Severity (PS), Security Self-Efficacy (SSE), Response Efficacy (RE), Cues to Action (CA), Peer Behavior (PBhv), Computer Skills (CS), Internet Skills (IS), Prior Experience with Computer Security Practices (PE), Perceived Benefits (PBnf), and Familiarity with Cyber-Threats (FCT), thereby exploring the relationship between these factors and the students' Cybersecurity Behaviors (CSB). A cross-sectional online survey was used to gather data from 450 undergraduate and postgraduate students from tertiary institutions within Klang Valley, Malaysia. Correlation analysis in SPSS version 25 was used to find the relationships among the cybersecurity behavioral factors. Results indicate that all factors except Perceived Severity were significantly related to the students' cybersecurity behaviors. Practically, the study underscores the need for more cybersecurity training and practice in tertiary institutions.
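A minimal sketch of this kind of correlation analysis, done in pandas/scipy rather than SPSS, is shown below; the column names and values are invented for illustration, whereas the study itself uses the twelve validated factors and CSB scores from the 450 respondents.

```python
# Minimal sketch of a factor-vs-behavior correlation analysis.
# Data values are placeholders, not the study's survey responses.
import pandas as pd
from scipy.stats import pearsonr

survey = pd.DataFrame({
    "PV":  [3.2, 4.1, 2.8, 3.9, 4.5],   # Perceived Vulnerability
    "SSE": [4.0, 3.5, 2.9, 4.2, 3.8],   # Security Self-Efficacy
    "CSB": [3.8, 3.9, 2.5, 4.1, 4.3],   # Cybersecurity Behavior score
})

for factor in ["PV", "SSE"]:
    r, p = pearsonr(survey[factor], survey["CSB"])
    print(f"{factor} vs CSB: r={r:.2f}, p={p:.3f}")
```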
Industrial control systems (ICSs) are used in various infrastructures and industrial plants to realize control operations and ensure safety. Concerns about the cybersecurity of industrial control systems have risen due to the increased number of cyber-attack incidents on critical infrastructures and the growing cyber connectivity of ICSs. Moreover, the operation of industrial control systems is bound to vital aspects of life: safety, economy, and security. This paper presents a semi-supervised, hybrid attack detection approach for industrial control systems that combines Isolation Forest and Convolutional Neural Network (CNN) models. The proposed framework is developed using normal operational data and is composed of a feature extraction model, implemented as a One-Dimensional Convolutional Neural Network (1D-CNN), and an Isolation Forest model for detection. The two models are trained independently: the feature extraction model extracts useful features from the continuous-time signals, which are then used along with the binary actuator signals to train the Isolation Forest-based detection model. The proposed approach is applied to a down-scaled industrial control system, the Secure Water Treatment (SWaT) water treatment testbed. The performance of the proposed method is compared with other works using the same testbed and shows improved detection capability.
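A minimal sketch of the two-stage idea follows: a 1D-CNN trained only on normal sensor windows to produce features, and an Isolation Forest fitted on those features together with the binary actuator signals. The window shape, hyper-parameters, and the autoencoder-style training objective are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of 1D-CNN feature extraction + Isolation Forest detection,
# trained on (placeholder) normal operational data only.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import IsolationForest

WINDOW, N_SENSORS, N_ACTUATORS = 60, 25, 10
normal_windows = np.random.rand(500, WINDOW, N_SENSORS)       # placeholder data
normal_actuators = np.random.randint(0, 2, (500, N_ACTUATORS))

# 1D-CNN autoencoder; the encoder output serves as the learned feature vector.
inputs = tf.keras.Input(shape=(WINDOW, N_SENSORS))
x = tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Conv1D(16, 5, activation="relu", padding="same")(x)
features = tf.keras.layers.GlobalAveragePooling1D(name="features")(x)
x = tf.keras.layers.Dense(WINDOW * N_SENSORS)(features)
outputs = tf.keras.layers.Reshape((WINDOW, N_SENSORS))(x)
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal_windows, normal_windows, epochs=5, verbose=0)

# Extract features and train the detector; predict() == -1 later flags anomalies.
encoder = tf.keras.Model(inputs, autoencoder.get_layer("features").output)
train_features = np.hstack([encoder.predict(normal_windows, verbose=0),
                            normal_actuators])
detector = IsolationForest(contamination="auto", random_state=0)
detector.fit(train_features)
```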
The paper considers an expert system that assesses the state of information security in authorities and organizations of various forms of ownership. The proposed expert system makes it possible to evaluate compliance with the requirements of both organizational and technical information protection measures, as well as the level of compliance of the information protection system as a whole. The expert assessment method is used as the basic method for assessing the state of information protection. The developed expert system significantly reduces routine operations during an information security audit. The assessment results are presented clearly and enable the leadership of authorities and organizations to make informed decisions on further improving the information protection system.
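One simple way to picture such an expert assessment is as a weighted aggregation of per-requirement compliance judgments, as in the short sketch below; the requirement names, weights, and scale are invented and do not reflect the system's actual rule base.

```python
# Minimal sketch of aggregating expert compliance judgments into one score.
requirements = [
    # (requirement, weight, compliance in [0, 1] as judged by experts)
    ("access control policy documented", 0.3, 1.0),
    ("audit logging enabled",            0.3, 0.5),
    ("backup and recovery tested",       0.2, 0.0),
    ("staff security training held",     0.2, 1.0),
]

overall = sum(w * c for _, w, c in requirements) / sum(w for _, w, _ in requirements)
print(f"overall compliance level: {overall:.0%}")
```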
This study aimed to recognize emotional states from facial images using a deep learning method. In the study, which was approved by an ethics committee, a custom data set was created from videos of 20 male and 20 female participants simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were split into image frames, and then face images were segmented from the frames using Haar cascade classifiers. After image preprocessing, the custom data set contained more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy was 99.62%, the validation loss was 0.0109, and the validation accuracy was 99.71%.
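A minimal sketch of such a pipeline is given below: Haar-cascade face extraction with OpenCV followed by a small LeNet-style classifier with 7 emotion classes. The input size, layer widths, file name, and training settings are illustrative assumptions rather than the study's exact configuration.

```python
# Sketch: Haar-cascade face segmentation + LeNet-style CNN for 7 emotions.
import cv2
import tensorflow as tf

# 1) Segment the face region from a video frame with a Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder frame
faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
if len(faces) > 0:
    x, y, w, h = faces[0]
    face = cv2.resize(frame[y:y + h, x:x + w], (32, 32))

# 2) LeNet-style classifier over the 7 facial expressions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(6, 5, activation="relu"),
    tf.keras.layers.AveragePooling2D(),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.AveragePooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```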
Ransomware has seen steady growth over the years, with several concerning trends that indicate efficient, targeted attacks against organizations and individuals alike. These opportunistic attackers indiscriminately target both public and private sector entities to maximize gain. In this article, we highlight the criticality of key management in ransomware's cryptosystem in order to facilitate building effective solutions against this threat. We introduce the ransomware kill chain to elucidate the path our adversaries must take to attain their malicious objective. We examine current solutions against ransomware in light of this kill chain and specify which constraints on ransomware are violated by the existing solutions. Finally, we present the notion of memory attacks against ransomware's key management and report our initial experiments with dynamically extracting decryption keys from real-world ransomware. The results of our preliminary research are promising, and the extracted keys were successfully used in subsequent data decryption.
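One simplified building block behind such memory attacks is scanning a process memory dump for high-entropy regions that may hold symmetric key material; the sketch below illustrates only that step (real extraction, e.g., matching AES key schedules, is considerably more involved), and the file name and threshold are assumptions.

```python
# Sketch: locate high-entropy candidate key regions in a memory dump.
import math
from collections import Counter

KEY_LEN = 32            # candidate length for a 256-bit key
ENTROPY_THRESHOLD = 4.8 # bits per byte; illustrative cutoff

def shannon_entropy(chunk: bytes) -> float:
    counts = Counter(chunk)
    total = len(chunk)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

with open("ransomware_process.dmp", "rb") as f:   # placeholder dump file
    dump = f.read()

candidates = []
for offset in range(0, len(dump) - KEY_LEN, 16):  # scan on 16-byte boundaries
    window = dump[offset:offset + KEY_LEN]
    if shannon_entropy(window) >= ENTROPY_THRESHOLD:
        candidates.append((offset, window.hex()))

print(f"{len(candidates)} high-entropy candidate key regions found")
```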
We present ClearTrack, a system that tracks metadata for each primitive value in Java programs to detect and nullify a range of vulnerabilities such as integer overflow/underflow and SQL/command injection vulnerabilities. Contributions include new techniques for eliminating false positives associated with benign integer overflows and underflows, new metadata-aware techniques for detecting and nullifying SQL/command injection attacks, and results from an independent evaluation team. These results show that 1) ClearTrack operates successfully on Java programs comprising hundreds of thousands of lines of code (including instrumented jar files and Java system libraries, the majority of the applications comprise over 3 million lines of code), 2) because of computations such as cryptography and hash table calculations, these applications perform millions of benign integer overflows and underflows, and 3) ClearTrack successfully detects and nullifies all tested integer overflow and underflow and SQL/command injection vulnerabilities in the benchmark applications.
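As a conceptual illustration only: ClearTrack instruments Java bytecode, but the core check on each arithmetic result can be pictured as a bounds check against the signed 32-bit integer range, with out-of-range results treated as detected (and potentially nullified) overflows. The sketch below expresses that idea in Python with invented names; it is not ClearTrack's mechanism.

```python
# Conceptual illustration of checked 32-bit addition; names are illustrative.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def checked_add32(a: int, b: int) -> int:
    """Return a + b, flagging results outside the signed 32-bit range."""
    result = a + b
    if not (INT_MIN <= result <= INT_MAX):
        # A hardened runtime could clamp, wrap, or abort here instead.
        raise OverflowError(f"integer overflow: {a} + {b}")
    return result

try:
    checked_add32(2_000_000_000, 200_000_000)
except OverflowError as err:
    print(err)  # integer overflow: 2000000000 + 200000000
```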
The use of biometrics in security applications is vulnerable to several hacking threats. Cancellable biometrics has therefore emerged as a suitable solution to this problem. This paper presents a one-way cancellable biometric transform that depends on 3D chaotic maps for face and fingerprint encryption. It aims to prevent cloning of the original biometrics and to allow the templates used by each user in different applications to vary. The permutations achieved with the chaotic maps guarantee high security of the biometric templates, especially with the 3D implementation of the encryption algorithm. In addition, the paper presents a hardware implementation of this framework. The proposed algorithm also achieves good performance in the presence of low and moderate levels of noise. An experimental version of the proposed cancellable biometric system has been implemented on an FPGA. The obtained results demonstrate the strong performance of the proposed cancellable biometric system.
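A minimal sketch of chaos-driven permutation, the basic operation such cancellable transforms rely on, is shown below; it uses a 1D logistic map rather than the paper's 3D maps, and the parameters and template size are illustrative.

```python
# Sketch: a chaotic sequence drives a key-dependent, repeatable shuffling
# of biometric template pixels.
import numpy as np

def chaotic_permutation(n: int, x0: float = 0.6137, r: float = 3.99) -> np.ndarray:
    """Generate a permutation of range(n) from a logistic-map trajectory."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.argsort(seq)   # ordering of the chaotic values = permutation

template = np.random.rand(64, 64)              # placeholder biometric template
perm = chaotic_permutation(template.size)
cancellable = template.flatten()[perm].reshape(template.shape)

# The same (x0, r) secret regenerates perm, so the transform is repeatable
# per application and revocable by changing the chaotic seed.
```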
We consider different models of malicious multiple access channels, in particular the binary adder channel and the A-channel, and show how they can be used to reformulate digital fingerprinting coding problems. In particular, we propose a new model of multimedia fingerprinting coding in which not only zeroes and plus/minus ones but arbitrary coefficients of linear combinations of noise-like signals can be used to form watermarks (digital fingerprints). This modification allows a dramatic increase in the possible number of users, with the property that if t or fewer malicious users create a forged digital fingerprint, then the dealer of the system can identify all of them with zero error probability. We show how the resulting problems are related to the compressed sensing problem.
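The compressed-sensing connection can be pictured with the toy sketch below: a forged fingerprint is a linear combination of a few users' noise-like signals, and the dealer recovers the sparse set of colluders by sparse recovery. The dimensions and the use of Orthogonal Matching Pursuit are illustrative choices, not the paper's coding scheme or guarantees.

```python
# Toy sketch: recover a small coalition of colluders via sparse recovery.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_users, sig_len, n_colluders = 200, 80, 3

signals = rng.standard_normal((sig_len, n_users))      # one column per user
colluders = rng.choice(n_users, size=n_colluders, replace=False)
coeffs = np.zeros(n_users)
coeffs[colluders] = rng.uniform(0.5, 1.5, size=n_colluders)  # arbitrary weights
forged = signals @ coeffs                               # observed forged mark

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_colluders)
omp.fit(signals, forged)
recovered = np.flatnonzero(omp.coef_)
print("true colluders:", sorted(colluders), "recovered:", sorted(recovered))
```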