Bibliography
Network Intrusion Detection Systems (NIDS) are either signature-based or anomaly-based. The NIDS presented in this paper is an anomaly-based Neural Network Intrusion Detection System (NNIDS). The proposed NNIDS successfully recognizes learned malicious activities in a network environment. It was tested against the SYN flood attack, the UDP flood attack, nMap scanning, and non-malicious communication.
Although many digital signature algorithms are available nowadays, the speed of signing and/or verifying a digital signature is crucial for many applications. Some algorithms are fast at signing but slow at verification, while others are the reverse; an algorithm that is fast at both is therefore worth pursuing. The traditional GOST algorithm has the shortest signing time but the longest verification time among comparable DSA-family algorithms, so this work seeks to improve its signature verification time. A modified GOST digital signature algorithm variant is developed that improves signature verification speed by reducing computational complexity while retaining the algorithm's efficient signing. The signature verification of this variant executed 1.5 times faster than the original algorithm, with all parameter values (public and private keys, random numbers, etc.) kept identical for both the signing and verification processes. This variant is therefore suitable for applications that require short times for both signing and verification. Keywords—Discrete Algorithms, Authentication, Digital Signature Algorithms (DSA), GOST, Data Integrity
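For context, here is a minimal Python sketch of the classic (unmodified) GOST R 34.10-94 sign/verify flow; the toy parameters p=23, q=11, a=2 are our own illustration, and the paper's modified verification is not reproduced. It makes visible why verification is the slow half: it needs two modular exponentiations modulo p plus an inversion modulo q, versus a single exponentiation for signing.

    # Textbook GOST R 34.10-94 (illustrative toy parameters, not secure)
    import hashlib, secrets

    p, q, a = 23, 11, 2            # q divides p-1; a has order q mod p

    def H(msg: bytes) -> int:
        h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % q
        return h or 1              # the standard maps a zero digest to 1

    def keygen():
        x = secrets.randbelow(q - 1) + 1       # private key x
        return x, pow(a, x, p)                 # public key y = a^x mod p

    def sign(msg, x):
        h = H(msg)
        while True:
            k = secrets.randbelow(q - 1) + 1   # fresh nonce per signature
            r = pow(a, k, p) % q               # one exponentiation mod p
            s = (x * r + k * h) % q
            if r and s:
                return r, s

    def verify(msg, sig, y):
        r, s = sig
        if not (0 < r < q and 0 < s < q):
            return False
        v = pow(H(msg), q - 2, q)              # h^-1 mod q (q is prime)
        z1, z2 = s * v % q, (q - r) * v % q
        # two exponentiations mod p: the cost the paper attacks
        return pow(a, z1, p) * pow(y, z2, p) % p % q == r

    x, y = keygen()
    assert verify(b"hello", sign(b"hello", x), y)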
A nonrepudiation protocol from a sender S to a set of potential receivers {R1, R2, ..., Rn} performs two functions. First, this protocol enables S to send to every potential receiver Ri a copy of file F along with a proof that can convince an unbiased judge that F was indeed sent by S to Ri. Second, this protocol also enables each Ri to receive from S a copy of file F and to send back to S a proof that can convince an unbiased judge that F was indeed received by Ri from S. When a nonrepudiation protocol from S to {R1, R2, ..., Rn} is implemented in a cloud system, the communications between S and the set of potential receivers {R1, R2, ..., Rn} are not carried out directly. Rather, these communications are carried out through a cloud C. In this paper, we present a nonrepudiation protocol that is implemented in a cloud system and show that this protocol is correct. We also show that this protocol has two clear advantages over nonrepudiation protocols that are not implemented in cloud systems.
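As a rough illustration of the two proofs involved, the following Python sketch uses plain Ed25519 signatures from the `cryptography` package: S signs a proof of origin over the file digest, and a receiver Ri signs a proof of receipt over the same digest. The NRO/NRR labels are our own shorthand, and the actual cloud-mediated exchange in the paper is considerably more involved.

    # Non-repudiation of origin (NRO) and of receipt (NRR), sketched
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    s_key = Ed25519PrivateKey.generate()       # sender S
    r_key = Ed25519PrivateKey.generate()       # receiver Ri

    digest = hashlib.sha256(b"contents of file F").digest()

    nro = s_key.sign(b"NRO" + digest)          # S -> Ri, sent along with F
    nrr = r_key.sign(b"NRR" + digest)          # Ri -> S, after receiving F

    # an unbiased judge holding the public keys can later check both proofs
    s_key.public_key().verify(nro, b"NRO" + digest)   # raises if forged
    r_key.public_key().verify(nrr, b"NRR" + digest)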
A nonrepudiation protocol from party S to party R performs two tasks. First, the protocol enables party S to send to party R some text x along with a proof (that can convince a judge) that x was indeed sent by S. Second, the protocol enables party R to receive text x from S and to send to S a proof (that can convince a judge) that x was indeed received by R. A nonrepudiation protocol from one party to another is called two-phase iff the two parties execute the protocol as specified until one of the two parties receives its complete proof. Then and only then does this party refrain from sending any message specified by the protocol because these messages only help the other party complete its proof. In this paper, we present methods for specifying and verifying two-phase nonrepudiation protocols.
In cyberspace, a digital signature is a mathematical technique that plays a significant role in validating the authenticity of digital messages, emails, or documents. The digital signature mechanism allows the recipient to trust that the received message indeed comes from the claimed sender and was not altered in transit, providing a solution to the problems of tampering and impersonation in digital communications. In everyday terms, it is the equivalent of a handwritten signature or stamped seal, but it offers more security. This paper proposes a scheme that enables users to digitally sign their communications by validating their identity through their mobile devices, utilizing the user's ambient Wi-Fi-enabled devices. The proposed scheme depends on something the user possesses (i.e., Wi-Fi-enabled devices) and something in the user's environment (i.e., ambient Wi-Fi access points) where the validation process is implemented, in a way that requires no effort from users and removes the "weak link" from the validation process. The proposed scheme was examined experimentally.
In any security system, many security issues relate to either the sender or the receiver of the message. Quantum computing has proven to be a plausible approach to solving security issues such as eavesdropping, replay attacks, and man-in-the-middle attacks. In the e-voting setting, one of these issues has been solved, namely the integrity of the data (the ballot). In this paper, we propose a scheme that solves the problem of repudiation, which can occur when a voter denies the value of a ballot, either to cheat or because the value was genuinely changed by a third party. By randomly entangling pairs of parties, the person who verifies the ballots creates the entangled state and keeps it in a database for later use in establishing the non-repudiation of either of the two voters.
This research proposes a system for detecting known and unknown Distributed Denial of Service (DDoS) attacks. The proposed system applies two different intrusion detection approaches: anomaly-based distributed artificial neural networks (ANNs) and a signature-based approach. The Amazon public cloud was used to run Spark as the fast cluster engine with varying numbers of machine cores. The experimental results achieved higher detection accuracy and detection rate than either the signature-based or the neural-network-based approach alone.
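To make the anomaly-based half concrete, here is a hedged scikit-learn sketch of a small feed-forward ANN that separates DDoS-like flows from normal ones. The three features (packets/sec, SYN ratio, distinct source IPs) and the synthetic data are invented for illustration; the paper's distributed Spark pipeline is far more elaborate.

    # Toy ANN flow classifier (illustrative features and data)
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    normal = rng.normal([50, 0.1, 5], [10, 0.05, 2], size=(500, 3))
    attack = rng.normal([5000, 0.9, 400], [800, 0.05, 80], size=(500, 3))
    X = np.vstack([normal, attack])
    y = np.array([0] * 500 + [1] * 500)        # 1 = DDoS-like

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16, 8),
                                      max_iter=500, random_state=0))
    clf.fit(X, y)
    print(clf.predict([[4800, 0.85, 350], [55, 0.12, 6]]))   # expect [1 0]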
Collecting and analyzing personal data is important in modern information applications. Although the privacy of data providers should be protected, some adversarial users may behave badly in circumstances where they are not identified; at the same time, the privacy of honest users should not be infringed. Thus, detecting anomalies without revealing normal users' identities is quite important for operating information systems that use personal data. Although various statistical and machine-learning methods have been developed for detecting anomalies, it is difficult to know in advance what anomaly will come up, so it would be useful to provide a "general" framework that can employ any anomaly detection method regardless of the type of data and the nature of the abnormality. In this paper, we propose a privacy-preserving anomaly detection framework that allows an authority to detect adversarial users while other honest users remain anonymous. Using the cryptographic techniques of group signatures with message-dependent opening (GS-MDO) and public key encryption with non-interactive opening (PKENO), we provide a correspondence table that links a user and data in a secure way, and we can employ any anonymization technique and any anomaly detection method. It is particularly worth noting that no big brother exists, meaning that no single entity can identify users, while bad behaviors are always traceable. We also show the result of implementing our framework; briefly, its overhead is on the order of dozens of milliseconds.
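The "no big brother" property can be pictured with a much simpler stand-in than GS-MDO/PKENO: nested symmetric encryption under two authorities' keys, so that neither authority alone can de-anonymize a pseudonym. The Python sketch below (Fernet, from the `cryptography` package) only illustrates this split of trust; it has none of the verifiability the paper's primitives provide.

    # Two-authority correspondence table, toy version
    from cryptography.fernet import Fernet

    opener = Fernet(Fernet.generate_key())     # authority 1
    admitter = Fernet(Fernet.generate_key())   # authority 2

    # enrolment: pseudonym -> identity encrypted under BOTH keys
    table = {"user-7f3a": admitter.encrypt(opener.encrypt(b"alice@example.org"))}

    # an anomaly is detected for "user-7f3a": both authorities must cooperate
    identity = opener.decrypt(admitter.decrypt(table["user-7f3a"]))
    print(identity)                            # b'alice@example.org'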
Homomorphic signatures can provide a credential for a result that was indeed computed with a given function on a data set by an untrusted third party, such as a cloud server, when the input data are stored with their signatures beforehand. Boneh and Freeman (EUROCRYPT 2011) proposed a homomorphic signature scheme for polynomial functions of any degree; however, that scheme is not based on the standard short integer solution (SIS) problem as its security assumption. In this paper, we show a homomorphic signature scheme for quadratic polynomial functions whose security assumption is based on the standard SIS problem. Our scheme constructs the signatures of multiplications as tensor products of the original signature vectors of the input data, so that homomorphism holds. Moreover, the security of our scheme is reduced to the hardness of the SIS problems with respect to two moduli, one of which is a power of the other. We show the reduction by constructing, from any forger of our scheme, solvers of the SIS problem with respect to either modulus.
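In notation of our own (not taken from the paper), the underlying hardness assumption and the multiplicative step can be sketched as:

    \[ \mathrm{SIS}_{n,m,q,\beta}:\ \text{given uniform } A \in \mathbb{Z}_q^{n \times m},\ \text{find } z \in \mathbb{Z}^{m}\setminus\{0\},\ \|z\| \le \beta,\ Az \equiv 0 \pmod{q}, \]
    \[ \sigma_{m_1 \cdot m_2} \;=\; \sigma_{m_1} \otimes \sigma_{m_2} . \]

Since the tensor product of two short vectors is again short, quadratic monomials in the signed data inherit short, verifiable signature vectors.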
Public key (asymmetric) cryptography is widely used to implement data security in information and communication systems. The RSA algorithm (Rivest, Shamir, and Adleman) is one of the most popular and widely used public key cryptosystems because of its relatively low complexity. RSA has two main functions: encryption and decryption. The Digital Signature Algorithm (DSA) is the signature algorithm standardized by the Digital Signature Standard (DSS) and is also a public key cryptosystem; its two main functions are creating digital signatures and verifying their validity. In this paper, the authors compare the computation times of RSA and DSA at several key sizes and select the sizes that perform best, then combine the two algorithms to improve data security. Based on the simulation results, the authors chose RSA-1024 for the encryption process and added digital signatures using DSA-512, so the messages sent are not only encrypted but also carry digital signatures for data authentication.
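A hedged sketch of this sign-and-encrypt combination, using the Python `cryptography` package: DSA provides the signature, RSA the encryption. The paper pairs RSA-1024 with DSA-512, but modern libraries refuse 512-bit DSA keys, so DSA-1024 stands in here; both sizes are far below today's recommendations and appear only for illustration.

    # RSA encryption + DSA signature, mirroring the paper's combination
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, dsa, padding

    rsa_priv = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    dsa_priv = dsa.generate_private_key(key_size=1024)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    message = b"transfer 100 to bob"
    signature = dsa_priv.sign(message, hashes.SHA256())        # authenticity
    ciphertext = rsa_priv.public_key().encrypt(message, oaep)  # confidentiality

    # receiver: decrypt, then validate the signature
    plaintext = rsa_priv.decrypt(ciphertext, oaep)
    dsa_priv.public_key().verify(signature, plaintext, hashes.SHA256())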
Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not statically analyzable and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
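The coverage-based generation is easy to picture on a toy automaton. The Python sketch below (our construction, not the paper's algorithm) takes a DFA accepting any string containing "<script" as a stand-in vulnerability signature and emits one accepted test string per transition: a shortest prefix reaching the edge's source state, the edge's symbol, then a shortest suffix reaching acceptance.

    # Transition-coverage test generation over a toy vulnerability signature
    from collections import deque

    PATTERN = "<script"
    ACCEPT = len(PATTERN)                      # state i = matched i chars
    ALPHABET = sorted(set(PATTERN) | {"a"})

    def step(state, ch):                       # toy DFA transition function
        if state == ACCEPT:
            return ACCEPT                      # accepting sink
        if ch == PATTERN[state]:
            return state + 1
        return 1 if ch == PATTERN[0] else 0

    def shortest_word(src, targets):           # BFS: shortest driving string
        seen, queue = {src}, deque([(src, "")])
        while queue:
            s, w = queue.popleft()
            if s in targets:
                return w
            for ch in ALPHABET:
                t = step(s, ch)
                if t not in seen:
                    seen.add(t)
                    queue.append((t, w + ch))
        return None

    tests = set()
    for s in range(ACCEPT + 1):                # cover every (state, symbol) edge
        for ch in ALPHABET:
            pre = shortest_word(0, {s})
            post = shortest_word(step(s, ch), {ACCEPT})
            if pre is not None and post is not None:
                tests.add(pre + ch + post)
    print(tests)                               # every test contains "<script"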
It is notably challenging to design an efficient and secure signature scheme based on error-correcting codes. One approach to building such signature schemes is to derive them from an identification protocol through the Fiat-Shamir transform. All such code-based protocols must be run for several rounds, since each run of the protocol allows a cheating probability of either 2/3 or 1/2. The resulting signature size is proportional to the number of rounds, making the 1/2 cheating probability version more attractive. We present a signature scheme based on double circulant codes in the rank metric, derived from an identification protocol with cheating probability 2/3. We reduce this probability to almost 1/2 to obtain the smallest signature among code-based signature schemes based on the Fiat-Shamir paradigm: around 22 KBytes at the 128-bit security level. Furthermore, among all code-based signature schemes, our proposal has the lowest combined signature-plus-public-key size, and the smallest secret and public key sizes. We provide a security proof in the Random Oracle Model, implementation performance figures, and a comparison with the parameters of similar signature schemes.
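To see why the per-round cheating probability drives the signature size, compare the number of rounds r needed to push the soundness error below 2^{-128} (our arithmetic, consistent with the abstract):

    \[ \left(\tfrac{2}{3}\right)^{r} \le 2^{-128} \;\Longrightarrow\; r \ge \frac{128}{\log_2(3/2)} \approx 219, \qquad \left(\tfrac{1}{2}\right)^{r} \le 2^{-128} \;\Longrightarrow\; r \ge 128 . \]

Since the signature grows linearly with r, moving from 2/3 to (almost) 1/2 shortens it by roughly 40%.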
Cryptography and encryption form a topic blurred by complexity, which makes it difficult for the majority of the public to grasp. Our research focuses on SSL technology involving CAs, a centralized system that manages and issues certificates to web servers and computers for validating identity. We first explain how a certificate provides a secure connection, creating trust between two parties that wish to communicate with one another over the internet. The paper then examines what happens when trust is compromised and how the information being transmitted could fall into the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. To see why CAre will benefit both average and technical users of the internet, one must understand what website security entails; therefore, this paper dives deep into website security through the use of public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.
The current state of the internet relies heavily on SSL/TLS and the certificate authority model. This model has systematic problems, in both its design and its implementation: certificate revocation, certificate authority governance, breaches, poor security practices, single points of failure, and root stores. This paper begins with a general introduction to SSL/TLS and a description of the role of certificates, certificate authorities, and root stores in the current model. It then explores problems with the current model and describes work being done to help mitigate them.
The signcryption technique was first proposed by Y. Zheng; it combines two cryptographic operations, digital signature and message encryption, in a single step. We cryptanalyze the technique and observe that both the signature and the encryption become vulnerable if forged public keys are used. This paper proposes an improvement using a modified DSS (Digital Signature Standard) version of the ElGamal signature together with DHP (the Diffie-Hellman key exchange protocol), and shows that the vulnerabilities in both the signature and encryption methods used in Zheng's signcryption are circumvented. DHP is used to establish the symmetric session key and is combined with the signature in such a way that the vulnerabilities of DHP are avoided. Security and performance analysis of our signcryption technique shows that the scheme is secure and is designed with the minimum possible number of operations, at a computation cost comparable to Zheng's scheme.
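The two building blocks can be pictured separately before they are fused: a Diffie-Hellman exchange yields the session key, and a signature binds the sender to the message. The Python sketch below uses X25519 and Ed25519 from the `cryptography` package purely as stand-ins; Zheng-style signcryption combines these operations far more tightly than this naive sign-then-encrypt does.

    # DH session key + signature: the two ingredients of signcryption
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def session_key(my_dh, peer_public):       # DHP -> symmetric session key
        shared = my_dh.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"session").derive(shared)

    alice_dh, bob_dh = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    alice_sig = Ed25519PrivateKey.generate()

    msg = b"order #42 confirmed"
    sig = alice_sig.sign(msg)                  # non-repudiable origin proof
    nonce = os.urandom(12)
    key = session_key(alice_dh, bob_dh.public_key())
    ct = AESGCM(key).encrypt(nonce, msg + sig, None)

    # Bob derives the same key, decrypts, and checks Alice's signature
    pt = AESGCM(session_key(bob_dh, alice_dh.public_key())).decrypt(nonce, ct, None)
    alice_sig.public_key().verify(pt[-64:], pt[:-64])   # Ed25519 sigs: 64 bytes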
Botnets are considered one of the most dangerous species of network-based attack today because they involve very large coordinated groups of hosts acting simultaneously. Behavioral analysis of computer networks underlies modern botnet detection methods, which aim to intercept traffic generated by malware for which no signatures exist yet. Defining the pattern of features on which behavioral analysis rests puts the emphasis on the quantity and quality of information to be captured and used to mark data streams as normal or abnormal. The problem is even more evident in extensive computer networks or clouds. In the present paper we show how heuristics applied to large-scale proxy logs, targeting a typical phase of the botnet life cycle, namely the search for C&C servers through AGDs (Algorithmically Generated Domains), can provide effective and extremely rapid results. The present work introduces some novel paradigms. The first is that some elements of the botnet supply chain can be completed without any interaction with the Internet, mostly in the presence of wide computer networks and/or clouds. The second is that behind a large number of workstations there are usually human beings, and it is unlikely that their behavior will cause marked changes in their interaction with the Internet within a fairly narrow time frame. Finally, AGDs currently exhibit common lexical features that are detectable quickly and without any black/white list.
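The "common lexical features" heuristic can be reduced to a few lines. Below is a hedged Python sketch that scores domain labels from proxy logs by character entropy and vowel ratio; the features and thresholds are our illustration, not the heuristics the paper evaluates.

    # Quick lexical triage of candidate AGDs (illustrative thresholds)
    import math
    from collections import Counter

    def entropy(s: str) -> float:               # Shannon entropy per character
        n = len(s)
        return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

    def looks_algorithmic(domain: str) -> bool:
        label = domain.split(".")[0]             # "xkqjzhwtrpd" of xkqjzhwtrpd.net
        vowels = sum(ch in "aeiou" for ch in label)
        return (len(label) >= 7                  # long...
                and entropy(label) > 3.0         # ...high-entropy...
                and vowels / len(label) < 0.25)  # ...and unpronounceable

    for d in ["google.com", "xkqjzhwtrpd.net", "wikipedia.org", "qwxzkvbnp.info"]:
        print(d, looks_algorithmic(d))           # flags only the two AGD-like names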