Biblio
Image encryption has long been used by armies and governments to support top-secret communication. Nowadays, it is frequently used to protect information in various civilian systems. This work performs secure image encryption by means of several chaotic maps, so that an authorized party can decrypt the image with the help of the encryption key. The reversible chaotic encryption technique makes use of Arnold's cat map, whose pixel shuffling confuses the image pixels according to a number of iterations chosen by the authorized image owner. This is followed by further chaotic encryption stages based on the Logistic map and the Tent map, which ensure secure image encryption. Simulation results show that the proposed system achieves improved NPCR, UACI, MSE and PSNR values.
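As an illustration of the pixel-shuffling stage described above, here is a minimal sketch (not the authors' implementation) of Arnold's cat map applied to a square grayscale image with NumPy; the iteration count plays the role of the owner-chosen parameter.

    import numpy as np

    def arnold_cat_map(img: np.ndarray, iterations: int) -> np.ndarray:
        """Shuffle pixels of a square image with Arnold's cat map.

        Each pixel (x, y) is mapped to ((x + y) mod N, (x + 2y) mod N); the map
        is invertible, so decryption applies the inverse map the same number of
        times (or iterates until the map's period is reached)."""
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "cat map needs a square image"
        out = img.copy()
        for _ in range(iterations):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            new_x = (x + y) % n
            new_y = (x + 2 * y) % n
            shuffled = np.empty_like(out)
            shuffled[new_x, new_y] = out[x, y]
            out = shuffled
        return out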
The paper presents an example Sensor-Cloud architecture that integrates security as a native ingredient. It is based on a multi-layer client-server model with separation of the physical and virtual instances of sensors, gateways, application servers and data storage. It proposes the use of virtualised sensor nodes as a prerequisite for increasing security, privacy, reliability and data protection. All main concerns in Sensor-Cloud security are addressed: from secure association, authentication and authorization to privacy, data integrity and data protection. The main idea is that securing the virtual instances is easier to implement, manage and audit, and that the only bottleneck is the physical interaction between a real sensor and its virtual reflection.
The problem of analytical synthesis of a reduced-order state observer for a bilinear dynamic system with scalar input and vector output is considered. Formulas for calculating the matrix coefficients of a nonlinear observer whose estimation error asymptotically approaches zero are obtained. Two modifications of the observer dynamic equation are proposed: the first requires differentiation of the output signal and the second does not. Based on the matrix canonization technique, solvability conditions for the synthesis problem and analytical expressions for the set of acceptable solutions are derived. A precise step-by-step algorithm for calculating the observer coefficients is given, together with an example of its practical use.
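For context, a reduced-order observer problem of this kind is commonly stated for a bilinear plant of the following generic form (our notation, not necessarily the paper's):

\[
\dot{x}(t) = A x(t) + u(t)\,N x(t) + b\,u(t), \qquad y(t) = C x(t),
\]

with scalar input $u$ and vector output $y$. A reduced-order observer reconstructs $z = T x$ through dynamics $\dot{\hat z} = (F_0 + u F_1)\hat z + (G_0 + u G_1) y + u\,T b$, where the coefficient matrices satisfy $T A = F_0 T + G_0 C$ and $T N = F_1 T + G_1 C$; the estimation error $e = \hat z - T x$ then obeys $\dot e = (F_0 + u F_1) e$ and must be driven to zero asymptotically.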
A mobile ad hoc network is a type of ad hoc network in which nodes change their locations and configure themselves. It uses a wireless medium to communicate with other networks. It has no centralized authority, and each node is able to perform certain tasks on its own. Each node maintains a routing table from which it determines the optimal route for forwarding packets, and link failures must be reflected in that table to keep it consistent. Such networks have wide application in civilian environments such as meeting rooms and cab networking, as well as in military search-and-rescue operations.
This paper presents a novel feature learning model for cyber security tasks. We propose to use autoencoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector, obtained from cyber security phenomena, and extracts a code vector that captures the semantic similarity between the feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, a large part of this success comes from the appropriate original feature set used in this paper. The AE can also provide more discriminative features than other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly reducing the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.
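The kind of fully connected AE described above can be sketched as follows in PyTorch; layer sizes, the code dimension and the data loader are illustrative assumptions, not the paper's configuration. The bottleneck "code" is the learned feature representation fed to downstream classifiers.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Fully connected AE: the bottleneck code is used as the learned feature vector."""
        def __init__(self, n_features: int, code_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 128), nn.ReLU(),
                nn.Linear(128, code_dim))
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 128), nn.ReLU(),
                nn.Linear(128, n_features))

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    def train_unsupervised(model, loader, epochs=10, lr=1e-3):
        """Train on unlabeled feature vectors with a reconstruction objective only."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for (batch,) in loader:            # loader yields 1-tuples of feature tensors
                recon, _ = model(batch)
                loss = loss_fn(recon, batch)   # no labels are used
                opt.zero_grad()
                loss.backward()
                opt.step()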
As data sizes grow, cloud storage is becoming a familiar way to store significant amounts of private information. Government and private organizations need to transfer plenty of business files from one end to another. However, we lose privacy if we exchange information without data encryption and a secure communication mechanism. To protect data from hacking we can use symmetric encryption, but it suffers from the key exchange problem. Although asymmetric key encryption overcomes this limitation of symmetric key encryption, it can only encrypt a limited amount of data, which is not feasible for large data files. In this paper, we propose a probabilistic approach to the Pretty Good Privacy technique for encrypting large data, named ``BigCrypt'', in which both symmetric and asymmetric key encryption are used. Our goal is to achieve zero-tolerance security on a significant amount of encrypted data. We have experimentally evaluated our technique on three different platforms.
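The hybrid pattern the paper builds on (the general PGP-style idea, not BigCrypt itself) can be sketched with the Python cryptography package: a random symmetric key encrypts the bulk data, and an RSA public key encrypts only that small key.

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def hybrid_encrypt(plaintext: bytes, rsa_public_key):
        # Symmetric part: AES-GCM encrypts the (possibly large) payload.
        session_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
        # Asymmetric part: RSA-OAEP wraps only the small session key.
        wrapped_key = rsa_public_key.encrypt(
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

    # Usage: generate a keypair and encrypt some data.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped, nonce, ct = hybrid_encrypt(b"large business file ...", private_key.public_key())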
Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily-deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web, and store them in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in real-time. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google's Certificate Transparency on the server-side, with a Firefox extension on the client-side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
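The filter cascade idea can be illustrated with a simplified sketch (hand-rolled Bloom filters over byte identifiers; CRLite's actual parameters and encoding are not reproduced here): each layer's false positives from the opposite set are pushed into the next layer, so queries over the known certificate universe always terminate with a correct answer.

    import hashlib

    class BloomFilter:
        def __init__(self, n_bits: int, n_hashes: int):
            self.bits = bytearray(n_bits // 8 + 1)
            self.n_bits, self.n_hashes = n_bits, n_hashes

        def _positions(self, item: bytes):
            for i in range(self.n_hashes):
                h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
                yield int.from_bytes(h[:8], "big") % self.n_bits

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item: bytes):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    def build_cascade(revoked, valid, bits_per_item=10, n_hashes=3):
        """Alternate layers: false positives from the excluded set feed the next layer."""
        layers, include, exclude = [], set(revoked), set(valid)
        while include:
            bf = BloomFilter(max(8, bits_per_item * len(include)), n_hashes)
            for cert in include:
                bf.add(cert)
            false_pos = {c for c in exclude if c in bf}   # must be handled by the next layer
            layers.append(bf)
            include, exclude = false_pos, include
        return layers

    def is_revoked(cert: bytes, layers) -> bool:
        for depth, bf in enumerate(layers):
            if cert not in bf:
                return depth % 2 == 1   # absent at even depth -> not revoked; odd -> revoked
        return len(layers) % 2 == 1

    revoked = {b"cert-serial-1", b"cert-serial-2"}
    valid = {b"cert-serial-%d" % i for i in range(3, 1000)}
    layers = build_cascade(revoked, valid)
    print(is_revoked(b"cert-serial-1", layers), is_revoked(b"cert-serial-42", layers))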
Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when classification is required and the model scales polynomially with the size of the feature set (application areas include genomics, the life sciences, cyber-security, etc.). Feature selection is the process of finding the minimum subset of features that provides the maximum predictive power. Many state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns about the efficiency and optimality of that search. A unified framework was recently presented for information-theoretic feature selection that tied together many of the works of the past twenty years. That work showed that joint mutual information maximization (JMI) is generally the best option; however, greedy search for JMI scales quadratically and is infeasible on high-dimensional datasets. In this contribution, we propose a fast, information-theoretic approximation of JMI. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets and demonstrate that it returns feature sets that are highly consistent with JMI while decreasing the run time required to perform feature selection.
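For reference, the greedy forward search with the JMI criterion (the baseline that the paper approximates) can be sketched for discrete or discretized features as follows; sklearn's mutual_info_score is used here to estimate the mutual information terms.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def joint_mi(xk, xj, y):
        """Estimate I((Xk, Xj); Y) for discrete features via a joint-symbol encoding."""
        joint = [f"{a}|{b}" for a, b in zip(xk, xj)]
        return mutual_info_score(joint, y)

    def greedy_jmi(X, y, n_select):
        """Greedy forward search with the JMI score; cost grows with the selected-set size."""
        n_features = X.shape[1]
        remaining = set(range(n_features))
        # Start with the single feature maximizing I(X_f; Y).
        first = max(remaining, key=lambda f: mutual_info_score(X[:, f], y))
        selected = [first]
        remaining.remove(first)
        while len(selected) < n_select and remaining:
            best = max(remaining,
                       key=lambda f: sum(joint_mi(X[:, f], X[:, j], y) for j in selected))
            selected.append(best)
            remaining.remove(best)
        return selected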
The emergence of general-purpose system-on-chip (SoC) architectures has given rise to a number of significant security challenges. The current trend in SoC design is system-level integration of heterogeneous technologies consisting of a large number of processing elements such as programmable RISC cores, memory, DSPs, and accelerator function units/ASIC. These processing elements may come from different providers, and application executable code may have varying levels of trust. Some of the pressing architecture design questions are: (1) how to implement multi-level user-defined security; (2) how to optimally and securely share resources and data among processing elements. In this work, we develop a secure multicore architecture, named Hermes. It represents a new architectural framework that integrates multiple processing elements (called tenants) of secure and non-secure cores into the same chip design while (a) maintaining individual tenant security, (b) preventing data leakage and corruption, and (c) promoting collaboration among the tenants. The Hermes architecture is based on a programmable secure router interface and a trust-aware routing algorithm. With 17% hardware overhead, it enables the implementation of processing-element-oblivious secure multicore systems with a programmable distributed group key management scheme.
In the era of Big Data, software systems are affected by growing complexity, with respect to both functional and non-functional requirements. As more and more people use software applications over the web, recognizing whether traffic is malicious or legitimate is a challenge. The traffic load on security controllers, as well as the complexity of the security rules used to detect attacks, can grow to levels where current solutions no longer suffice. In this work, we propose a hierarchical distributed architecture for security control that partitions responsibility and workload among many security controllers. In addition, our architecture proposes a simpler way of defining security rules, allowing security to be enforced at an operational level rather than a development level.
The Internet of Things (IoT) is an advanced information technology that has drawn society's attention. Sensors and actuators are usually recognized as the smart devices of our environment. At the same time, IoT security brings up new issues. Internet connectivity and the possibility of interacting with smart devices cause those devices to become more involved in human life. Therefore, security is a fundamental requirement in designing the IoT. The IoT has three remarkable features: overall perception, reliable transmission and intelligent processing. Because of the IoT's span, securing conveyed data is an essential factor for system security. Hybrid encryption is a model that can be used in the IoT: it provides strong security with low computation. In this paper, we propose a hybrid encryption algorithm designed to reduce security risks while increasing encryption speed and lowering computational complexity. The purpose of this hybrid algorithm is information integrity, confidentiality and non-repudiation in data exchange for the IoT. Finally, the suggested encryption algorithm has been simulated in MATLAB, and its speed and security were evaluated against a conventional encryption algorithm.
In Wyner's wiretap II model of communication, Alice and Bob are connected by a channel that can be eavesdropped by a computationally unlimited adversary who can select a fraction of the communication to view, and the goal is to provide perfect information-theoretic security. Information-theoretic security is increasingly important because of the threat of quantum computers that can effectively break the algorithms and protocols used in today's public key infrastructure. We consider interactive protocols for the wiretap II channel with an active adversary who can eavesdrop and add adversarial noise to the eavesdropped part of the codeword. These channels capture wireless settings where malicious eavesdroppers within reception distance of the transmitter can eavesdrop on the communication and introduce jamming signals into the channel. We derive a new upper bound R ≤ 1 - ρ on the rate of interactive protocols over the two-way wiretap II channel with active adversaries, and construct a perfectly secure protocol family with achievable rate 1 - 2ρ + ρ². This is strictly higher than the rate of the best one-round protocol, which is 1 - 2ρ, hence showing that interaction improves the rate. We also prove that, even with interaction, reliable communication is possible only if ρ < 1/2. An interesting aspect of this work is that our bounds also hold in the network setting where two nodes are connected by n paths, a fraction ρ of which are corrupted by the adversary. We discuss our results, relate them to other works, and propose directions for future work.
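To make the rate comparison concrete, note that the interactive achievable rate factors nicely (our arithmetic, using the bounds stated above):

\[
1 - 2\rho + \rho^2 = (1-\rho)^2, \qquad 1 - 2\rho \;<\; (1-\rho)^2 \;<\; 1 - \rho \quad \text{for } 0 < \rho < \tfrac{1}{2}.
\]

For example, at $\rho = 0.2$ the best one-round rate is $0.6$, the interactive achievable rate is $0.64$, and the upper bound is $0.8$; interaction strictly improves the rate while remaining below the upper bound.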
A novel method, consisting of fault detection, rough set generation, element isolation and parameter estimation, is presented for multiple-fault diagnosis of analog circuits with tolerance. Firstly, a linear-programming formulation is developed to transform fault detection of a circuit with limited accessible terminals into checking, from the measurements, whether a feasible solution exists under the tolerance constraints. Secondly, a fault characteristic equation is derived to generate a fault rough set; it is proved that the node voltages of the nominal circuit can be used in the fault characteristic equation despite tolerance. Lastly, fault detection with revised deviation restrictions for the suspected fault elements is performed to locate the faulty elements and estimate their parameters. The diagnosis accuracy and parameter identification precision of the method are verified by simulation results.
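The feasibility-check idea (a generic illustration, not the authors' exact formulation) can be sketched with SciPy: under a linearized sensitivity model, fault detection reduces to asking whether any parameter-deviation vector within the tolerance bounds explains the measured node-voltage deviations.

    import numpy as np
    from scipy.optimize import linprog

    def consistent_with_tolerance(S, delta_v, tol):
        """Check whether measured voltage deviations delta_v can be explained by
        parameter deviations x with |x_i| <= tol_i, using the linearized model
        S @ x = delta_v.  A feasible LP solution means 'no fault detected'."""
        n = S.shape[1]
        res = linprog(c=np.zeros(n),                  # pure feasibility problem
                      A_eq=S, b_eq=delta_v,
                      bounds=[(-t, t) for t in tol],
                      method="highs")
        return res.status == 0                        # 0 == solved (feasible)

    # Hypothetical 2-measurement, 3-parameter example with illustrative sensitivities.
    S = np.array([[1.0, 0.5, 0.2],
                  [0.3, 1.2, 0.1]])
    print(consistent_with_tolerance(S, np.array([0.05, -0.02]), tol=[0.05, 0.05, 0.05]))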
In this paper, we demonstrate a neuromorphic cognitive computing approach to a Network Intrusion Detection System (IDS) for cyber security using deep learning (DL). The algorithmic power of DL is merged with fast and extremely power-efficient neuromorphic processors. In this implementation, the data is numerically encoded and used to train an unsupervised deep learning model, an autoencoder (AE), in the training phase. The weights produced by the AE are used as initial weights for a supervised training phase using neural networks. The final weights are converted to discrete values using Discrete Vector Factorization (DVF) to generate crossbar weights, synaptic weights, and thresholds for neurons. Finally, the generated crossbar weights, synaptic weights, thresholds, and leak values are mapped to crossbars and neurons. In the testing phase, the encoded test samples are converted to spiking form using a hybrid encoding technique. The model has been deployed and tested on the IBM Neurosynaptic Core Simulator (NSCS) and on an actual IBM TrueNorth neurosynaptic chip. The experimental results show around 90.12% accuracy for network intrusion detection on the physical neuromorphic chip. Furthermore, we investigated the proposed system not only for detecting malicious packets but also for classifying specific types of attacks, achieving 81.31% recognition accuracy. The neuromorphic implementation provides high detection and classification accuracy for network intrusion detection at extremely low power.
This work presents a proof-of-concept implementation of the first hardware-based design of Moving Target Defense over IPv6 (MT6D) in full Register Transfer Level (RTL) logic, with future sights on an embedded Application-Specific Integrated Circuit (ASIC) implementation. Contributions are an IEEE 802.3 Ethernet stream-based in-line network packet processor with a specialized Complex Instruction Set Computer (CISC) instruction set architecture, RTL-based Network Time Protocol v4 synchronization, and a modular crypto engine. Traditional static network addressing gives attackers the considerable advantage of taking time to plan and execute attacks against a network. To counter this, MT6D provides a network-host obfuscation technique that offers network-based keyed access to specific hosts without altering existing network infrastructure, and it is an excellent technique for protecting the Internet of Things, IPv6 over Low Power Wireless Personal Area Networks, and high-value globally routable IPv6 interfaces. This is done by cryptographically altering IPv6 network addresses every few seconds in a synchronous manner at all endpoints. A border gateway device can be used to intercept select packets and unobtrusively perform this action. Software-driven implementations have posed many challenges, namely constant code maintenance to remain compliant with all library and kernel dependencies, the need for a host computing platform, and less than optimal throughput. This work seeks to overcome these challenges in a lightweight system developed for practical wide deployment.
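The synchronous address-rotation idea can be sketched in software (a conceptual illustration only, not the RTL design or MT6D's exact derivation): both endpoints hash a shared key, the host identifier and the current time window to obtain a time-varying interface identifier, so they compute the same address without any exchange.

    import hmac
    import hashlib
    import ipaddress
    import time

    def mt6d_style_address(prefix, eui64, shared_key, rotation_seconds=10, now=None):
        """Derive a time-varying IPv6 address: 64-bit prefix + hashed 64-bit IID."""
        window = int((now if now is not None else time.time()) // rotation_seconds)
        digest = hmac.new(shared_key,
                          eui64 + window.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        iid = int.from_bytes(digest[:8], "big")           # low 64 bits of the address
        net = ipaddress.IPv6Network(prefix)
        return str(ipaddress.IPv6Address(int(net.network_address) | iid))

    # Endpoints sharing the same key, host ID and clock derive the same address.
    addr = mt6d_style_address("2001:db8::/64", bytes.fromhex("0211223344556677"),
                              shared_key=b"example-shared-secret")
    print(addr)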
Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. This new paradigm will change the health care model in the future: a doctor will perform DNA sequence matching instead of regular clinical laboratory tests to diagnose and medicate diseases. Additionally, with the help of affordable personal genomics services such as 23andMe, personalized medicine will be applied to a large population. Cloud computing is the natural computing model here, as the volume of DNA data and the computation over it are often immense. However, due to its sensitivity, the DNA data should be encrypted before being outsourced to the cloud. In this paper, we start from a practical system model of personalized medicine and present a solution to the secure DNA sequence matching problem in cloud computing. Compared with existing solutions, our scheme protects the DNA data privacy as well as the search pattern, providing a better privacy guarantee. We prove that our scheme is secure under a well-defined cryptographic assumption, namely the sub-group decision assumption over a bilinear group. Unlike existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real-world problems is practical and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem; it applies to general privacy-preserving pattern matching problems, which are widely used in the real world.
Blockchain technology has emerged as an attractive solution to address performance and security issues in distributed systems. Blockchain's public, distributed peer-to-peer ledger benefits cloud computing services that require functions such as assured data provenance, auditing, management of digital assets, and distributed consensus. Blockchain's underlying consensus mechanism allows building a tamper-proof environment in which transactions on any digital assets are verified by a set of authentic participants, or miners. Using strong cryptographic methods, blocks of transactions are chained together to make the records immutable. However, achieving consensus demands computational power from the miners in exchange for a handsome reward, so greedy miners always try to exploit the system by augmenting their mining power. In this paper, we first discuss blockchain's capability to provide assured data provenance in the cloud and present vulnerabilities in the blockchain cloud. We then model the block withholding (BWH) attack in a blockchain cloud under distinct pool reward mechanisms. The BWH attack gives a rogue miner ample resources in the blockchain cloud to disrupt honest miners' mining efforts, which we verify through simulations.
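For intuition, the standard textbook model of block withholding against a proportional-reward pool (a generic illustration, not necessarily the exact reward mechanisms studied in the paper) can be written in a few lines: the attacker's infiltrating power earns pool shares without contributing blocks.

    def bwh_revenues(pool_power, attacker_power, infiltration):
        """Expected revenue rates under block withholding against a proportional-reward
        pool, with total network power normalized to 1.  'infiltration' is the fraction
        of total power the attacker diverts into the victim pool, submitting shares
        but withholding full blocks."""
        assert 0 <= infiltration <= attacker_power
        effective = 1.0 - infiltration                   # withheld power finds no blocks
        pool_blocks = pool_power / effective             # victim pool's block revenue rate
        infiltrator_cut = infiltration / (pool_power + infiltration)
        attacker = (attacker_power - infiltration) / effective + infiltrator_cut * pool_blocks
        pool_members = (1.0 - infiltrator_cut) * pool_blocks
        return attacker, pool_members

    # A 30% pool attacked by a 20% miner diverting 5% of total power:
    # the attacker earns more than 0.20 while honest pool members earn less than 0.30.
    print(bwh_revenues(pool_power=0.30, attacker_power=0.20, infiltration=0.05))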
Incentive-driven advanced attacks have become a major concern in cyber-security. Traditional defense techniques that adopt a passive and static approach by assuming a fixed attack type are insufficient in the face of highly adaptive and stealthy attacks. In particular, a passive defense approach often creates information asymmetry in which the attacker knows more about the defender than vice versa. To this end, moving target defense (MTD) has emerged as a promising way to reverse this information asymmetry. The main idea of MTD is to (continuously) change certain aspects of the system under control to increase the attacker's uncertainty, which in turn increases attack cost and complexity and reduces the chance of a successful exploit in a given amount of time. In this paper, we go one step further and show that MTD can be improved when combined with information disclosure. In particular, we consider a defender that adopts an MTD strategy to protect a critical resource across a network of nodes, and propose a Bayesian Stackelberg game model with the defender as the leader and the attacker as the follower. After fully characterizing the defender's optimal migration strategies, we show that the defender can design a signaling scheme that exploits the uncertainty created by MTD to further influence the attacker's behavior to its own advantage. We obtain conditions under which signaling is useful, and show that strategic information disclosure can be a promising way to further reverse the information asymmetry and achieve more efficient active defense.
Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they have been used for espionage, information theft, voyeurism and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild and, in the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample's behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, this is the first large-scale systematic study of RAT use.
The work proposes and justifies an algorithm for processing computer security incidents based on the authors' signatures of cyber-attacks. Attention is also paid to a SOPKA design pattern based on the Russian ViPNet technology. Recommendations are made regarding the establishment of a corporate SOPKA segment that meets the requirements of Presidential Decree No. 31c of January 15, 2013 “On the establishment of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation” and the “Concept of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation” approved by the President of the Russian Federation on December 12, 2014, No. K 1274.
In this paper a novel data hiding method is proposed, based on a Non-Linear Feedback Shift Register (NLFSR) and the Tinkerbell 2D chaotic map. So far, most work on steganography with chaotic maps has been confined to image steganography, where significant restrictions limit the payload. In our work, a 2D chaotic map and an NLFSR are used to develop a video steganography mechanism in which data is embedded in the segregated frames. This increases the data hiding capacity substantially. Moreover, the embedding positions in each frame differ from those in other frames, which increases the overall security of the proposed mechanism. We achieve these randomized data hiding points by using a chaotic map. Chaos theory, a branch of non-linear dynamics, is now widely used in cryptography and steganography because a tiny change in the initial conditions makes the output completely different. It is therefore very hard to recover the embedding positions of the data without knowing the initial values of the chaotic map.
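For illustration, the Tinkerbell map can drive the choice of embedding positions roughly as follows (a conceptual sketch with commonly used map parameters; the paper's exact parameters, NLFSR coupling and embedding rule are not reproduced here).

    def tinkerbell_positions(x0, y0, count, frame_width, frame_height,
                             a=0.9, b=-0.6013, c=2.0, d=0.50):
        """Iterate the Tinkerbell 2D chaotic map and quantize each state to a pixel
        coordinate, yielding hard-to-predict embedding positions."""
        x, y = x0, y0
        positions = []
        for _ in range(count):
            # x_{n+1} = x^2 - y^2 + a*x + b*y ;  y_{n+1} = 2*x*y + c*x + d*y
            x, y = x * x - y * y + a * x + b * y, 2 * x * y + c * x + d * y
            col = int(abs(x) * 1e6) % frame_width
            row = int(abs(y) * 1e6) % frame_height
            positions.append((row, col))
        return positions

    # The same initial conditions (the shared secret) reproduce the same positions.
    print(tinkerbell_positions(-0.72, -0.64, count=5, frame_width=1920, frame_height=1080))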
Air gaps often appear inside composite insulators and can lead to serious accidents. In order to detect these internal defects in composite insulators operating on transmission lines, a new non-destructive technique is proposed. In this study, a mathematical model of heat diffusion around the internal defects of composite insulators is built; the model helps analyze the propagation of heat loss and judge the structure and defects under the surface. Compared with traditional detection methods and other non-destructive techniques, this technique has many advantages. In the study, air defects were introduced artificially into composite insulators. The artificially fabricated samples were first tested by flash thermography, and this method shows good performance in revealing the structure and defects under the surface. Comparing flash and hot-air excitation, the artificial samples show a clearer response after flash heating, so flash excitation is preferable. Tests with different surface pollution show that pollution has little influence on revealing the structure or defects under the surface, and only some influence on heat diffusion. Defective composite insulators taken from the field were then examined, and the defect images are clear. This new active thermography system can detect defects quickly, efficiently and accurately, regardless of surface pollution and other environmental restrictions, so it holds broad promise for revealing defects and structure in composite insulators and other types of insulators.
Performing large-scale malware classification is increasingly becoming a critical step in malware analytics as the number and variety of malware samples grows rapidly. Statistical machine learning constitutes an appealing method to cope with this increase, as it can use mathematical tools to extract information from large-scale datasets and produce interpretable models. This has motivated a surge of scientific work on machine learning methods for the detection and classification of malicious executables. However, an optimal method for extracting the most informative features for different malware families, with the final goal of malware classification, is yet to be found. Fortunately, neural networks have evolved to the point where they can surpass the limitations of other methods in terms of hierarchical feature extraction. Consequently, neural networks now offer superior classification accuracy in many domains, such as computer vision and natural language processing. In this paper, we transfer the performance improvements achieved in the area of neural networks to the modeling of execution sequences of disassembled malicious binaries. We implement a neural network that consists of convolutional and feedforward constructs. This architecture embodies a hierarchical feature extraction approach that combines convolution over n-grams of instructions with plain vectorization of features derived from the headers of Portable Executable (PE) files. Our evaluation results demonstrate that our approach outperforms baseline methods, such as simple feedforward neural networks and support vector machines, achieving 93% precision and recall even in the presence of obfuscation in the data.
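A rough PyTorch sketch of the kind of hybrid architecture described (vocabulary size, layer widths, n-gram widths and the header-feature dimension are illustrative assumptions, not the paper's configuration): convolutions over embedded instruction tokens act as n-gram detectors, and their pooled outputs are concatenated with vectorized PE-header features before the feedforward classifier.

    import torch
    import torch.nn as nn

    class MalwareConvNet(nn.Module):
        """Convolution over instruction n-grams combined with vectorized PE-header features."""
        def __init__(self, vocab_size=256, embed_dim=64, n_header_feats=32, n_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, 128, kernel_size=n) for n in (3, 4, 5)])  # n-gram widths
            self.classifier = nn.Sequential(
                nn.Linear(3 * 128 + n_header_feats, 256), nn.ReLU(),
                nn.Linear(256, n_classes))

        def forward(self, instr_ids, header_feats):
            # instr_ids: (batch, seq_len) instruction tokens; header_feats: (batch, n_header_feats)
            x = self.embed(instr_ids).transpose(1, 2)          # (batch, embed_dim, seq_len)
            pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
            features = torch.cat(pooled + [header_feats], dim=1)
            return self.classifier(features)

    model = MalwareConvNet()
    logits = model(torch.randint(0, 256, (8, 500)), torch.randn(8, 32))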