Biblio
Algorithms for unsupervised anomaly detection have proven their effectiveness and flexibility; however, it is first necessary to determine at what class ratio an autoencoder begins to treat a given class as anomalous. For this reason, we propose a study of autoencoder effectiveness as a function of the ratio of anomalous to non-anomalous classes.
The emergence of high-speed networks in electric power systems creates a tight interaction between the cyberinfrastructure and the physical infrastructure and makes the power system susceptible to cyber penetration and attacks. To address this problem, this paper proposes an innovative approach to developing a specification-based intrusion detection framework that leverages the information provided by components in a contemporary power system. An autoencoder encodes the causal relations among the available information to create patterns with temporal state transitions, which are used as features in the proposed intrusion detection. This allows the proposed method to detect anomalies and cyber attacks.
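The core detection idea in both abstracts is reconstruction-error thresholding: an autoencoder trained on normal data reconstructs normal inputs well and anomalous ones poorly. The following sketch illustrates this with a linear stand-in (a rank-1 PCA projection playing the role of a trained autoencoder) on synthetic 2-D data; the data, threshold percentile, and "autoencoder" are all illustrative assumptions, not the papers' models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: "normal" points near a line, anomalies scattered.
normal = rng.normal(0, 1, (200, 1)) @ np.array([[1.0, 0.5]])
normal += rng.normal(0, 0.05, normal.shape)
anomalies = rng.normal(0, 1, (10, 2)) * 3

# A linear "autoencoder" (rank-1 PCA projection) stands in for a
# trained network: encode to 1 dimension, decode back to 2.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]                       # 1-D latent space

def reconstruction_error(x):
    z = (x - mean) @ basis.T         # encode
    x_hat = z @ basis + mean         # decode
    return np.linalg.norm(x - x_hat, axis=1)

# Threshold at a high percentile of the errors seen on normal data.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(anomalies) > threshold
print(f"{flags.sum()}/{len(flags)} anomalies flagged")
```

Varying the fraction of anomalous points mixed into the training set is exactly the ratio study the first abstract proposes: past some contamination ratio, the learned subspace absorbs the anomalous class and the errors stop separating.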
With the development of 5G technology and intelligent terminals, Pervasive Edge Computing (PEC) is the future direction of the Industrial Internet of Things (IIoT). In the pervasive edge computing environment, intelligent terminals can perform calculations and data processing; by migrating part of the computation of the original cloud computing model to intelligent terminals, a terminal can complete model training without uploading local data to a remote server. Pervasive edge computing solves the problem of data islands and has been successfully applied in scenarios such as vehicle interconnection and video surveillance. However, pervasive edge computing faces serious security problems: even a remote server that is honest but curious can design algorithms for the intelligent terminal to execute and infer sensitive content, such as identity data and private pictures, from the information the terminal returns. In this paper, we study the problem of honest-but-curious remote servers infringing intelligent terminal privacy and propose a differentially private collaborative deep learning algorithm for the pervasive edge computing environment. We use a Gaussian mechanism that satisfies the differential privacy guarantee to add noise to the first layer of the neural network to protect the terminal's data, and use the analytical moments accountant technique to track the cumulative privacy loss. Experiments show that with the Gaussian mechanism, the training data of intelligent terminals can be protected with only a small reduction in accuracy.
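The Gaussian mechanism the abstract relies on calibrates noise to a query's L2 sensitivity and the (ε, δ) budget. A minimal sketch, using the classic calibration σ = √(2 ln(1.25/δ)) · Δ / ε (valid for ε < 1) applied to a clipped first-layer gradient; the gradient, clipping norm, and budget values are illustrative assumptions, and the paper's accounting uses the analytical moments accountant rather than this single-release bound.

```python
import math
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Classic Gaussian mechanism: add noise calibrated so releasing
    `value` (with the given L2 sensitivity) is (epsilon, delta)-DP."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + rng.normal(0, sigma, size=np.shape(value)), sigma

rng = np.random.default_rng(42)

# Hypothetical first-layer gradient, clipped to L2 norm <= 1 so the
# sensitivity of the released update is known.
grad = rng.normal(0, 1, 64)
grad = grad / max(1.0, np.linalg.norm(grad))       # clip to norm 1

noisy_grad, sigma = gaussian_mechanism(grad, sensitivity=1.0,
                                       epsilon=0.5, delta=1e-5, rng=rng)
print(f"noise scale sigma = {sigma:.2f}")
```

Clipping before noising is what fixes the sensitivity; without it, no finite σ gives a DP guarantee.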
Nowadays it is trivial to have multiple virtual machines working in parallel on hardware platforms with high processing power. This cost-effective approach can be found at Internet Service Providers, in cloud service providers' environments, in research and development lab testing environments (for example, universities' student labs), in virtual applications for security evaluation, and in many other places. In the aforementioned cases, it is often necessary to start and/or stop virtual machines on the fly. At cloud service providers, all creation / tear-down actions are triggered by a customer request and cannot be postponed or delayed for later evaluation. When a new virtual machine is created, it is imperative to assign unique IP addresses to all network interfaces, as well as Domain Name System (DNS) records that contain text-based data, IP addresses, etc. Even worse, if a virtual machine has to be stopped or torn down, critical network resources such as IP addresses and DNS records have to be carefully controlled in order to avoid IP address conflicts and name resolution problems between an old virtual machine and a newly created one. This paper proposes a provisioning mechanism to avoid both DNS record and IP address conflicts due to human misconfiguration, problems that can cause networking operation service disruptions.
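The conflict-avoidance idea can be sketched as a pool that hands out each address exactly once and only recycles it after an explicit teardown. This is a minimal illustration using Python's stdlib `ipaddress` module, not the paper's mechanism; the class and method names are hypothetical, and real provisioning would also create and delete the matching DNS A/PTR records at the marked points.

```python
import ipaddress

class AddressPool:
    """Minimal sketch of conflict-free IP provisioning: every VM gets
    a unique address from the pool, and teardown must release it
    before the address (and its DNS record) can be reused."""

    def __init__(self, cidr):
        self.hosts = iter(ipaddress.ip_network(cidr).hosts())
        self.leases = {}          # vm name -> address
        self.released = []        # addresses safe to recycle

    def provision(self, vm_name):
        if vm_name in self.leases:
            raise ValueError(f"{vm_name} already has an address")
        addr = self.released.pop() if self.released else next(self.hosts)
        self.leases[vm_name] = addr
        return addr               # caller would also create DNS records here

    def teardown(self, vm_name):
        addr = self.leases.pop(vm_name)   # caller would also delete DNS records
        self.released.append(addr)
        return addr

pool = AddressPool("10.0.0.0/29")
a = pool.provision("vm-1")
b = pool.provision("vm-2")
pool.teardown("vm-1")
c = pool.provision("vm-3")   # recycles vm-1's address, no conflict
print(a, b, c)
```

Coupling release of the address with deletion of its DNS records in one operation is what prevents the stale-record/name-resolution conflicts the abstract describes.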
Cloud storage services (CSS) allow users to store their data in the cloud and avoid local storage and maintenance costs. Various data integrity auditing (DIA) schemes have been proposed to verify the integrity of data stored in the cloud. In most, if not all, existing schemes, the client must use a private key (PK) to generate authenticators for the data, and therefore needs a hardware token to store the PK and a password to activate it. If the hardware token is lost or the password forgotten, most existing DIA schemes can no longer work. To overcome this challenge, this work proposes a new DIA scheme without private key storage (PKS). Biometric data serves as the client's fuzzy private key (FPK), avoiding the hardware token, while the scheme can still effectively carry out the DIA. A linear sketch with coding and error-correction procedures confirms the client's identity, and a new signature scheme supports blockless verifiability and is compatible with the linear sketch.
Keywords: data integrity auditing (DIA), cloud computing, blockless verifiability, fuzzy biometric data, secure cloud storage (SCS), key exposure resilience (KER), third-party auditor (TPA), cloud audit server (CAS), cloud storage server (CSS), provable data possession (PDP)
In today's technological world, pervasive computing plays an important role in data computing and communication. Pervasive computing provides a mobile environment for decentralized computational services anywhere, anytime, in any context and location. It is flexible and makes portable devices and the computing around us part of our daily life. Laptops, smartphones, PDAs, and other portable devices can constitute the pervasive environment. Devices in pervasive environments are deployed worldwide and can receive various communications, including audio-visual services. Users and systems in this environment face the challenges of user trust, data privacy, and user and device node identity. To give a feasible resolution to these challenges, this paper proposes a dynamic-learning efficient security model (ESM) for the pervasive computing environment that addresses trustworthy and untrustworthy attackers. The ESM is also compared with existing generic models and provides a better accuracy rate than they do.
Structural analysis is the study of finding component functions for a given function. In this paper, we proceed with structural analysis of structures consisting of the S (nonlinear Substitution) layer and the A (Affine or linear) layer. Our main interest is the S1AS2 structure with different substitution layers and large input/output sizes. The purpose of our structural analysis is to find the functionally equivalent oracle F* and its component functions for a given encryption oracle F(= S2 ∘ A ∘ S1). As a result, we can construct the decryption oracle F*−1 explicitly and break the one-wayness of the building blocks used in a White-box implementation. Our attack consists of two steps: S layer recovery using multiset properties and A layer recovery using differential properties. We present the attack algorithm for each step and estimate the time complexity. Finally, we discuss the applicability of S1AS2 structural analysis in a White-box Cryptography environment.
Buffer overflow (BOF) is one of the most dangerous security vulnerabilities and can be exploited by malicious users. It can be detected by both static and dynamic analysis techniques. Dynamic analysis requires executing the program and checking its behavior against specifications, while static analysis inspects the source code for security vulnerabilities without executing it. Although many open-source and commercial security analysis tools employ static and dynamic methods, there is still room for improvement in their BOF detection capability. We propose an enhancement to the Cppcheck tool that statically detects BOF vulnerabilities in C programs using data flow analysis. We used the Juliet Test Suite to test our approach and selected the two best tools cited in the literature for BOF detection (Frama-C and Splint) to compare performance and accuracy. In our experiments, the proposed approach achieved a Youden index of 0.45, while Frama-C scored only 0.1 and Splint -0.47. These results show that our technique performs better than both the Frama-C and Splint static analysis tools.
Cloud computing, supported by advancements in virtualisation and distributed computing, has become the default option for implementing the IT infrastructure of organisations. Medical data, and in particular medical images, have increasing storage space and remote access requirements. Cloud computing satisfies these requirements, but unclear safeguards on data security can expose sensitive data to attacks. Furthermore, recent changes in legislation impose additional security constraints on technology to ensure the privacy of individuals and the integrity of data stored in the cloud. In contrast with this trend, current data security methods based on encryption add a performance overhead, and often they are not allowed on public cloud servers. Hence, this paper proposes a mechanism that combines data fragmentation to protect medical images on public cloud servers with a NoSQL database to ensure an efficient organisation of such data. Results indicate that the latency of the proposed method is significantly lower than that of AES, one of the most widely adopted data encryption mechanisms. Therefore, the proposed method is a favourable trade-off in environments with low latency requirements or limited resources.
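The protection idea is that no single server holds enough of an image to reconstruct it. A minimal sketch of splitting a byte blob into indexed fragments and reassembling them; the helper names are hypothetical and the paper's actual scheme (and its NoSQL indexing of fragments) is more elaborate.

```python
import hashlib

def fragment(blob, n):
    """Split a byte blob into n roughly equal fragments, keyed by an
    index so they can be stored on different servers and reassembled.
    A sketch of the fragmentation idea, not the paper's exact scheme."""
    size = -(-len(blob) // n)            # ceiling division
    return {i: blob[i * size:(i + 1) * size] for i in range(n)}

def reassemble(fragments):
    return b"".join(fragments[i] for i in sorted(fragments))

image = bytes(range(256)) * 40           # stand-in for a medical image
parts = fragment(image, 5)
restored = reassemble(parts)
print(hashlib.sha256(image).hexdigest() == hashlib.sha256(restored).hexdigest())
```

Unlike AES, splitting and concatenating costs no per-byte arithmetic, which is consistent with the latency advantage the abstract reports.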
Recent technological advancement demands that organizations have measures in place to manage their Information Technology (IT) systems. Enterprise Architecture Frameworks (EAFs) offer companies an efficient technique to manage their IT systems, aligning business requirements with effective solutions. As a result, experts have developed multiple EAFs, such as TOGAF, Zachman, MoDAF, DoDAF, and SABSA, to help organizations achieve their objectives while reducing cost and complexity. These frameworks, however, concentrate mostly on business needs and lack holistic enterprise-wide security practices, which may leave enterprises exposed to significant security risks resulting in financial loss. This study evaluates business capabilities in TOGAF, NIST, COBIT, MoDAF, DoDAF, SABSA, and Zachman, and identifies essential security requirements in the TOGAF, SABSA, and COBIT 19 frameworks by comparing their resiliency processes, which helps organizations easily select an applicable framework. The study shows that, besides business requirements, EAFs need to include precise cybersecurity guidelines aligned with EA business strategies. Enterprises now need to focus on building a resilient approach that goes beyond protection, detection, and prevention: they should be ready to withstand cyber-attacks by applying a relevant cyber-resiliency approach that improves how the impacts of cybersecurity risks are handled.
This project develops a face recognition-based door locking system with two-factor authentication using OpenCV. It uses a Raspberry Pi 4 as the microcontroller. Face recognition-based door locks have been around for many years, but most provide face recognition without any added security features, and they are costly. The design of this project is based on human face recognition and the sending of a One-Time Password (OTP) using the Twilio service. The system recognizes the person at the front door; only people whose faces match those stored in its dataset and who then input the correct OTP can unlock the door. The Twilio service and the Local Binary Pattern Histogram (LBPH) image processing algorithm have been adopted for this system, and a servo motor operates the door mechanism. Results show that LBPH takes a short time to recognize a face. Additionally, if an unknown face is detected, the system logs the instance to a "Fail" file and an accompanying CSV sheet.
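The per-pixel core of LBPH is simple enough to show directly: each pixel is replaced by an 8-bit code derived from comparing it with its 8 neighbours, and histograms of these codes over image cells form the face descriptor. A minimal sketch of the code computation for one 3x3 patch (the neighbour ordering is one common convention; OpenCV's trained `LBPHFaceRecognizer` handles the full pipeline):

```python
import numpy as np

def lbp_code(patch):
    """Compute the Local Binary Pattern code of the centre pixel of a
    3x3 patch: each of the 8 neighbours contributes one bit, set when
    the neighbour is >= the centre. LBPH builds histograms of these
    codes over image cells; this is only the per-pixel core."""
    centre = patch[1, 1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= centre)

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))
```

Because the code depends only on sign comparisons, it is cheap to compute and robust to monotonic lighting changes, which is why LBPH recognition is fast enough for a Raspberry Pi.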
Discovering vulnerabilities is an information-intensive task that requires a developer to locate the defects in code that have security implications. The task is difficult due to growing code complexity and some developers' lack of security expertise. Although tools have been created to ease the difficulty, no single one is sufficient; in practice, developers often use a combination of tools to uncover vulnerabilities. Yet the basis on which different tools are composed is underexplored. In this paper, we examine this composition basis by taking advantage of tool design patterns informed by foraging theory. We follow a design science methodology and carry out a three-step empirical study: mapping 34 foraging-theoretic patterns in a specific vulnerability discovery tool, formulating hypotheses about the value and cost of foraging under two composition scenarios, and performing a human-subject study to test the hypotheses. Our work offers insights into guiding developers' tool usage in detecting software vulnerabilities.
Safety- and security-critical developers have long recognized the importance of applying a high degree of scrutiny to a system’s (or subsystem’s) I/O messages. However, lack of care in the development of message-handling components can lead to an increase, rather than a decrease, in the attack surface. On the DARPA Cyber-Assured Systems Engineering (CASE) program, we have focused our research effort on identifying cyber vulnerabilities early in system development, in particular at the architecture development phase, and then automatically synthesizing components that mitigate the identified vulnerabilities from high-level specifications. This approach is highly compatible with the goals of the LangSec community. Advances in formal methods have allowed us to produce hardware/software implementations that are both performant and guaranteed correct. With these tools, we can synthesize high-assurance “building blocks” that can be composed automatically with high confidence to create trustworthy systems, using a method we call Security-Enhancing Architectural Transformations. Our synthesis-focused approach provides a higher-leverage insertion point for formal methods than is possible with post facto analytic methods, as the formal methods tools directly contribute to the implementation of the system without requiring developers to become formal methods experts. Our techniques encompass systems, hardware, and software development, as well as hardware/software co-design and co-assurance. We illustrate our method and tools with an example that implements security-improving transformations on system architectures expressed in the Architecture Analysis and Design Language (AADL). We show how message-handling components can be synthesized from high-level regular or context-free language specifications, as well as from a novel specification language for self-describing messages called Contiguity Types, and verified to meet arithmetic constraints extracted from the AADL model. Finally, we guarantee that the intent of the message-processing logic is accurately reflected in the application binary code through the use of the verified CakeML compiler, in the case of software, or the Restricted Algorithmic C toolchain with ACL2-based formal verification, in the case of hardware/software co-design.
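The LangSec idea of deriving a message filter from a regular-language specification can be sketched in a few lines. The spec below (a command word plus a bounded decimal payload) and the function names are purely illustrative; the paper's synthesized components are formally verified implementations, not Python.

```python
import re

# Hypothetical high-level spec for a well-formed telemetry message:
# a command word, a comma, and a decimal payload in the range 0-100.
# A filter derived from such a regular-language spec admits only
# matching messages, shrinking the attack surface of the component
# behind it.
SPEC = re.compile(r"^(SET|GET),(100|[1-9]?[0-9])$")

def filter_messages(messages):
    """Pass through only messages accepted by the specification."""
    return [m for m in messages if SPEC.fullmatch(m)]

inbound = ["SET,42", "GET,100", "SET,999", "GET,7; DROP TABLE", "SET,07"]
print(filter_messages(inbound))
```

Note that out-of-range payloads ("SET,999") and malformed encodings ("SET,07") are rejected by construction, which is the arithmetic-constraint checking the abstract describes extracting from the AADL model.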
Trust estimation of vehicles is vital for the correct functioning of Vehicular Ad Hoc Networks (VANETs), as it enhances their security by identifying reliable vehicles. However, accurate trust estimation remains out of reach, as existing works do not consider all malicious features of vehicles, such as dropping or delaying packets, altering content, and injecting false information. Moreover, data consistency of messages is not guaranteed, as they pass through multiple paths and can easily be altered by malicious relay vehicles, which makes it difficult to measure the effect of content tampering in trust calculation. Further, the unreliable wireless communication of VANETs and unpredictable vehicle behavior may introduce uncertainty into trust estimation and hence reduce its accuracy. In this view, we put forward three trust factors, captured by fuzzy sets, to adequately model the malicious properties of a vehicle, and apply a fuzzy logic-based algorithm to estimate its trust. We also introduce a parameter to evaluate the impact of content modification on trust calculation. Experimental results reveal that the proposed scheme detects malicious vehicles with high precision and recall and makes decisions with higher accuracy than the state of the art.
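The mechanics of fuzzy-set trust factors can be sketched in a few lines: each observed behaviour gets a membership degree in a "benign" fuzzy set, and the degrees are combined with a fuzzy AND. The factor names, triangular membership shapes, and min-combination below are illustrative assumptions, not the paper's actual rule base.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(drop_rate, delay_rate, tamper_rate):
    """Toy fuzzy inference: each malicious feature gets a 'benign'
    membership degree; the trust score is their minimum (fuzzy AND).
    Factor names and membership shapes are illustrative only."""
    benign_drop = triangular(drop_rate, -0.5, 0.0, 0.5)
    benign_delay = triangular(delay_rate, -0.5, 0.0, 0.5)
    benign_tamper = triangular(tamper_rate, -0.2, 0.0, 0.2)
    return min(benign_drop, benign_delay, benign_tamper)

honest = trust_score(0.05, 0.10, 0.01)
malicious = trust_score(0.40, 0.30, 0.15)
print(f"honest={honest:.2f} malicious={malicious:.2f}")
```

The narrower support of the tampering set encodes the idea that even small amounts of content modification should depress trust sharply, mirroring the dedicated tampering parameter the abstract introduces.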
Nowadays, video surveillance systems are part of our daily life because of their role in ensuring the security of goods and people, and they generate a huge amount of video data. Several research works based on the ontology paradigm have therefore tried to develop an efficient system to index and search precisely a very large volume of videos. Due to their semantic expressiveness, ontologies have been much in demand in recent years in the field of video surveillance to overcome the semantic gap between the interpretation of low-level extracted data and the high-level semantics of the video. Despite its good expressiveness, a classical ontology may not be sufficient for handling uncertainty, which is commonly present in the video surveillance domain; hence the need for a new ontological approach that better represents uncertainty. Fuzzy logic is recognized as a powerful tool for dealing with vague, incomplete, imperfect, or uncertain data and information. In this work, we develop a new ontological approach based on fuzzy logic. All the relevant fuzzy concepts that can appear in a video surveillance domain, such as Video_Objects, Video_Events, and Video_Sequences, are represented with their fuzzy ontology DataProperty and the fuzzy relations between them (ontology ObjectProperty). To achieve this goal, the new fuzzy video surveillance ontology is implemented using Fuzzy OWL 2, an extension of the standard semantic web ontology language OWL 2.
Ransomware is one of the most serious threats and constitutes a significant challenge in the cybersecurity field. Cybercriminals use this attack to encrypt the victim's files or infect the victim's devices, then demand a ransom in exchange for restoring access to these files and devices. The escalating threat of ransomware to thousands of individuals and companies creates an urgent need for a system capable of proactively detecting and preventing it. In this research, a new approach is proposed to detect and classify ransomware based on three machine learning algorithms (Random Forest, Support Vector Machines, and Naïve Bayes). The feature set is extracted directly from raw bytes using static analysis of the samples to improve detection speed. To achieve the best detection accuracy, CF-NCF (Class Frequency - Non-Class Frequency) is utilized to generate feature vectors. The proposed approach can differentiate between ransomware and goodware files with a detection accuracy of up to 98.33 percent.
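The "features from raw bytes, no execution" idea can be sketched with plain byte-frequency histograms and a nearest-centroid classifier. This is only a stand-in: the paper weights features with CF-NCF and classifies with Random Forest/SVM/Naïve Bayes, and the synthetic byte blobs below are illustrative, not real samples.

```python
import numpy as np

def byte_histogram(blob):
    """Normalized 256-bin byte-frequency feature vector, extracted
    directly from raw bytes (no unpacking or execution)."""
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / counts.sum()

rng = np.random.default_rng(1)
# Synthetic stand-ins: "encrypted" ransomware payloads look uniform,
# "goodware" is biased toward low byte values. Illustrative only.
ransom = [rng.integers(0, 256, 4096, dtype=np.uint8).tobytes() for _ in range(20)]
good = [rng.integers(0, 64, 4096, dtype=np.uint8).tobytes() for _ in range(20)]

centroids = {label: np.mean([byte_histogram(b) for b in blobs], axis=0)
             for label, blobs in [("ransomware", ransom), ("goodware", good)]}

def classify(blob):
    h = byte_histogram(blob)
    return min(centroids, key=lambda c: np.linalg.norm(h - centroids[c]))

sample = rng.integers(0, 256, 4096, dtype=np.uint8).tobytes()
print(classify(sample))
```

Because no sample is ever executed or even parsed, feature extraction stays fast, which is the detection-speed advantage the abstract claims for static byte-level analysis.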
Digitization is driving exceptional changes across all industries through advances in analytics, automation, Artificial Intelligence (AI), and Machine Learning (ML). However, new business requirements associated with the efficiency benefits of digitalization are forcing increased connectivity between IT and OT networks, thereby increasing the attack surface and hence the cyber risk. Cyber threats are on the rise, and securing industrial networks is challenging given the shortage of human resources in the OT field, the growing trend toward IT/OT convergence, and the various high-tech methods attackers now deploy to intrude into control systems. We have developed an innovative real-time ICS cyber test kit to obtain OT industrial network traffic data with various industrial attack vectors. In this paper, we introduce the industrial datasets generated from the ICS test kit, which incorporate the cyber-physical system of industrial operations. These datasets, comprising a normal baseline along with different industrial hacking scenarios, are analyzed for research purposes. Metadata is obtained from deep packet inspection (DPI) of the flow properties of network packets; DPI analysis provides visibility into the contents of OT traffic based on communication protocols. Advances in technology have led to the use of machine learning/artificial intelligence in intrusion detection systems (IDS) for ICS SCADA. The industrial datasets are pre-processed and profiled, and abnormality is analyzed with DPI. The processed metadata is normalized for ease of algorithmic analysis and modelled with deep learning ensemble LSTM algorithms for anomaly detection, an approach increasingly used to enhance OT IDS performance.
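The preprocessing steps named at the end (normalizing the DPI metadata and shaping it into sequences for an LSTM) can be sketched directly. The metadata columns and window length below are illustrative assumptions; the paper's feature set and model are more elaborate.

```python
import numpy as np

def minmax_normalize(x):
    """Scale each metadata column to [0, 1]; constant columns map to 0."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (x - lo) / span

def make_windows(x, length):
    """Slice a flow-metadata time series into overlapping windows of
    `length` steps, the input shape an LSTM-based detector consumes."""
    return np.stack([x[i:i + length] for i in range(len(x) - length + 1)])

# Hypothetical DPI metadata columns: packets/s, bytes/s, distinct ports.
meta = np.array([[10, 1200, 2],
                 [12, 1500, 2],
                 [11, 1300, 3],
                 [90, 9000, 40],   # abnormal burst
                 [10, 1250, 2]], dtype=float)

norm = minmax_normalize(meta)
windows = make_windows(norm, length=3)
print(windows.shape)
```

Normalization keeps the very differently scaled columns (bytes/s vs. port counts) from dominating the model's loss, which is the "easiness of algorithm analysis" the abstract refers to.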