Biblio
In the last couple of years, the move to cyberspace has provided a more fertile environment for ransomware criminals than ever before. Notably, since the appearance of WannaCry, numerous ransomware detection solutions have been proposed. However, ransomware incident reports show that most organizations impacted by ransomware were running state-of-the-art ransomware detection tools. Hence, an alternative solution is urgently required, as the existing detection models are not sufficient to spot emerging ransomware threats. With this motivation, our work proposes "DeepGuard," a novel concept of modeling user behavior for ransomware detection. The main idea is to log the file-interaction pattern of typical user activity and pass it through a deep generative autoencoder architecture to recreate the input. With sufficient training data, the model learns to reconstruct typical user activity (the input) with minimal reconstruction error. Hence, by applying the three-sigma limit rule to the model's output, DeepGuard can distinguish ransomware activity from user activity. The experimental results show that DeepGuard effectively detects variant classes of ransomware with minimal false-positive rates. Overall, modeling attack detection around user behavior gives the proposed strategy deep visibility into various ransomware families.
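As a rough illustration of the reconstruction-error idea described above, the sketch below trains a small dense autoencoder on benign file-interaction feature vectors and applies the three-sigma limit to flag anomalies. It is not the authors' DeepGuard implementation: the feature size, network shape, and synthetic training data are assumptions.

    import numpy as np
    from tensorflow.keras import layers, models

    n_features = 32                                   # assumed size of a file-interaction feature vector
    benign = np.random.rand(2000, n_features).astype("float32")   # placeholder benign activity logs

    autoencoder = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(16, activation="relu"),
        layers.Dense(8, activation="relu"),           # bottleneck forces the model to learn typical activity
        layers.Dense(16, activation="relu"),
        layers.Dense(n_features, activation="sigmoid"),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(benign, benign, epochs=20, batch_size=64, verbose=0)

    # Three-sigma limit on the per-sample reconstruction error of benign data
    recon = autoencoder.predict(benign, verbose=0)
    errors = np.mean((benign - recon) ** 2, axis=1)
    threshold = errors.mean() + 3 * errors.std()

    def looks_like_ransomware(sample):
        """Flag a feature vector whose reconstruction error exceeds the three-sigma limit."""
        rec = autoencoder.predict(sample[None, :], verbose=0)[0]
        return float(np.mean((sample - rec) ** 2)) > threshold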
This Research Work in Progress paper presents a study on improving student learning performance in a virtual hands-on lab system in cybersecurity education. As the demand for cybersecurity-trained professionals is rapidly increasing, virtual hands-on lab systems have been introduced into cybersecurity education as a tool to enhance students' learning. To improve learning in a virtual hands-on lab system, instructors need to understand: which learning activities are associated with students' learning performance in this system? What relationships exist between different learning activities? What can instructors do to improve learning outcomes in this system? However, few of these questions have been studied for the use of virtual hands-on labs in cybersecurity education. In this research, we present our recent findings, identifying two learning activities that are positively associated with students' learning performance. Notably, the learning activity of reading lab materials (p < 0.01) plays a more significant role in hands-on learning than the learning activity of working on lab tasks (p < 0.05) in cybersecurity education. In addition, a student who spends more time reading lab materials may also work longer on lab tasks (p < 0.01).
Today, there are several applications that allow us to share images over the internet. All these images must be stored in a secure manner and should be accessible only to the intended recipients. Hence, it is of utmost importance to develop efficient and fast algorithms for the encryption of images. This paper uses chaotic generators to generate random sequences that can be used as keys for image encryption. These sequences are seemingly random and have good statistical properties, which makes them resistant to analysis and correlation attacks. However, these sequences have fixed cycle lengths, which restricts the number of sequences that can be used as keys. This paper utilises a neural network as a source of perturbation in a chaotic generator and uses its output to encrypt an image. The robustness of the encryption algorithm can be verified using NPCR, UACI, correlation coefficient analysis and information entropy analysis.
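For illustration only (not the paper's neural-network-perturbed generator), the sketch below XORs image bytes with a logistic-map keystream and computes the NPCR and UACI metrics named in the abstract. Note that a plain XOR stream has no diffusion, so its NPCR stays far below the roughly 99.6% a well-designed image cipher would reach; the seed, control parameter, and placeholder image are assumptions.

    import numpy as np

    def logistic_keystream(length, x0=0.3141592, r=3.99):
        """Generate `length` keystream bytes from the logistic map x <- r * x * (1 - x)."""
        x, out = x0, np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def encrypt(img, key=0.3141592):
        """XOR every pixel with the chaotic keystream (toy cipher, no diffusion stage)."""
        flat = img.astype(np.uint8).ravel()
        return (flat ^ logistic_keystream(flat.size, x0=key)).reshape(img.shape)

    def npcr_uaci(c1, c2):
        """NPCR: percentage of differing pixels; UACI: mean absolute intensity change (in %)."""
        npcr = 100.0 * (c1 != c2).mean()
        uaci = 100.0 * np.abs(c1.astype(int) - c2.astype(int)).mean() / 255.0
        return npcr, uaci

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder plaintext image
    img2 = img.copy(); img2[0, 0] ^= 1                          # single-pixel change
    print(npcr_uaci(encrypt(img), encrypt(img2)))               # low values reveal the missing diffusion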
This research presents a survey of trust-based routing protocols that protect the Internet of Things, aiming to ensure dependability and privacy during the routing process in remote systems. With the large number of Internet of Things (IoT) devices interconnected globally, the main issue is how to protect the routing of information in the underlying networks from different types of attacks. Users will not feel secure if they know that their private information could easily be accessed and compromised by unauthorized people or machines over the network. Trust is an important aspect of the Internet of Things (IoT); it enables entities to cope with the uncertainty and variability caused by the autonomous behavior of other devices. In a Mobile Ad-hoc Network (MANET), hosts move frequently in any direction, so the topology of the network also changes frequently. No single fixed algorithm is used for routing the packets; packets must be routed by intermediate nodes, which makes the network prone to various attacks. There are various approaches to computing trust for a node, such as the fuzzy trust approach, the trust management approach, the hybrid approach, etc. Adaptive Information Dissemination (AID) is a mechanism that verifies the packets in a specific transmission and analyses whether any attacks by hackers have occurred; it encompasses checking the packet count and detecting a trusted route between source and destination. Trust estimation based on the specific condition or context of a node, with the context information shared with the other nodes in the system, would provide a better solution to this problem. Here we present a survey of various trust management approaches in MANETs and summarize how these approaches establish trust among the participating nodes in a dynamic and uncertain MANET environment.
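As a hypothetical illustration of the kind of trust computation these approaches rely on, the sketch below blends a node's own forwarding observations with neighbour recommendations; the weighting factor and the suggested threshold are assumptions and do not come from any specific protocol in the survey.

    def direct_trust(forwarded, received):
        """Direct trust: fraction of packets a neighbour was observed to forward."""
        return forwarded / received if received else 0.5        # neutral value when no evidence yet

    def combined_trust(direct, recommendations, alpha=0.7):
        """Blend own observations with neighbours' recommendations (alpha is an assumed weight)."""
        indirect = sum(recommendations) / len(recommendations) if recommendations else 0.5
        return alpha * direct + (1 - alpha) * indirect

    # Example: node B forwarded 45 of 50 packets; two neighbours rate it 0.8 and 0.9
    trust_in_b = combined_trust(direct_trust(45, 50), [0.8, 0.9])
    print(f"trust in node B: {trust_in_b:.2f}")                 # route only through nodes above a chosen threshold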
Emails are a fundamental component of web applications, and there is exponential growth in the volume of email sent and received online. However, spam mail has become a serious issue in email communication. There are a number of content-based filtering systems available, namely content-based filters (CBF), image-based filtering and many other techniques for filtering spam messages. The existing technological solution consists of a combination of the Porter stemmer algorithm (PSA) and k-means clustering, which is adaptive in nature. These procedures are expensive in terms of computation and network resources, as they require examining the entire spam message and processing its full content on the server. These filters are also not robust, because the nature of spam mail and spamming techniques changes frequently. We propose an origin-based spam mail-filtering service, which works on the header data of the mail message regardless of the body content of the mail. It improves network and server performance and achieves higher precision, recall and accuracy than the existing methods. The goal is to design an effective, efficient and autonomous spam detection system that improves network performance and protects against unknown privileged-user attacks.
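A minimal sketch of the header-only (origin-based) idea, not the proposed system itself: it scores a message purely from standard header fields and never inspects the body. The blacklist and scoring weights are assumptions.

    from email import message_from_string

    SUSPECT_DOMAINS = {"example-spam.biz", "bulk-mailer.xyz"}    # hypothetical origin blacklist

    def header_spam_score(raw_message):
        """Score a message from its headers only; the body is never inspected."""
        msg = message_from_string(raw_message)
        sender = msg.get("From", "")
        score = 0
        if sender.rsplit("@", 1)[-1].strip("> ") in SUSPECT_DOMAINS:
            score += 2                                           # blacklisted origin domain
        if msg.get("Reply-To") and msg.get("Reply-To") != sender:
            score += 1                                           # mismatched reply address
        if not msg.get("Message-ID"):
            score += 1                                           # missing standard header
        return score                                             # e.g. treat score >= 2 as spam

    print(header_spam_score("From: a@example-spam.biz\nSubject: hello\n\nbody text"))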
An intrusion detection system (IDS) is used to examine malicious activity that occurs in a network or a system: it is software or a device that scans a system or a network for suspicious activity. Due to the growing connectivity between computers, intrusion detection has become vital for network security. Various machine learning techniques and statistical methodologies have been used to build different types of intrusion detection systems to protect networks. The performance of an intrusion detection system mainly depends on its accuracy, which must be enhanced to reduce false alarms and to increase the detection rate. In order to improve performance, different techniques have been used in recent works. Analyzing huge volumes of network traffic data is the main task of an intrusion detection system, and a well-organized classification methodology is required to handle it. The proposed approach addresses this issue by applying machine learning techniques, namely Support Vector Machine (SVM) and Naïve Bayes, which are well known for solving classification problems. The NSL-KDD dataset is used to evaluate the intrusion detection system. For the comparative analysis, the accuracy and misclassification rate of both classifiers are calculated, and the outcomes show that SVM performs better than Naïve Bayes.
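The comparison described above can be reproduced in outline with scikit-learn; the sketch below uses synthetic data as a stand-in for NSL-KDD, whose download and preprocessing are omitted.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    # Synthetic two-class data stands in for the preprocessed NSL-KDD records
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("SVM", SVC(kernel="rbf")), ("Naive Bayes", GaussianNB())]:
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{name}: accuracy={acc:.3f}, misclassification rate={1 - acc:.3f}")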
The objective of the honeypot security system is to identify unauthorized users and intruders in the network. Enterprise-level security becomes possible through high scalability. The central theme of this research is an intrusion detection system and intrusion prevention system realized through honeypot and honey-trap methodology, with dynamic configuration of the honeypot as the milestone of this mechanism. Eight different methodologies were deployed to catch intruders who exploit the unsecured network through unused IP addresses. The method adopted here identifies and traps intruders through honeypot activity. The result obtained is that intruders find it difficult to gain information from the network, which benefits many industries. A honeypot can utilize a real OS or a partially emulated one through high-interaction and low-interaction modes, respectively. The research work concludes that network activity and traffic can also be tracked through the honeypot, which provides added security to the secured network. Detection, prevention and response are the available categories, and, moreover, the system detects and confuses attackers.
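As a toy illustration of a low-interaction honeypot (not the eight methodologies of the paper), the sketch below listens on an otherwise unused port, logs every connection attempt, and returns a fake service banner; the port and banner are arbitrary choices.

    import socket, datetime

    def run_honeypot(host="0.0.0.0", port=2222):                 # arbitrary unused port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                with conn:
                    print(f"{datetime.datetime.now()} connection attempt from {addr}")   # log the intruder
                    conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")     # fake banner to waste the scanner's time

    # run_honeypot()   # uncomment to start listening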
With the development of Internet technology, software vulnerabilities have become a major threat to computer security. In this work, we propose vulnerability detection for source code using a Contextual LSTM (CLSTM). We evaluated the CLSTM against CNN and LSTM baselines on 23,185 programs collected from SARD. We extracted features through program slicing and, based on these features, used natural language processing techniques to analyse the programs at the source-code level. The experimental results demonstrate that CLSTM has the best performance for vulnerability detection, reaching an accuracy of 96.711% and an F1 score of 0.96984.
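The sketch below is only a plain-LSTM baseline over tokenized code slices, not the paper's Contextual LSTM or its slicing pipeline; the vocabulary size, sequence length, and synthetic labels are assumptions.

    import numpy as np
    from tensorflow.keras import layers, models

    vocab_size, seq_len = 5000, 200                              # assumed tokenizer settings
    X = np.random.randint(1, vocab_size, size=(1000, seq_len))   # token ids of program slices (placeholder)
    y = np.random.randint(0, 2, size=(1000,))                    # 1 = vulnerable, 0 = safe (placeholder)

    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 64),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)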
The extremely rapid development of the Internet of Things brings growing attention to the information security issue. The realization of cryptographically strong pseudo-random number generators (PRNGs) is crucial for securing sensitive data, and they play an important role in cryptography and in network security applications. In this paper, we carry out a comparative study of two pseudo-chaotic number generators (PCNGs). The first pseudo-chaotic number generator (PCNG1) is based on two nonlinear recursive filters of order one using a skew tent map (STmap) and a piece-wise linear chaotic map (PWLCmap) as nonlinear functions. The second pseudo-chaotic number generator (PCNG2) consists of four chaotic maps, namely PWLCmaps, an STmap and a logistic map, coupled by means of a binary diffusion matrix [D]. A comparative analysis of the two PCNGs is carried out in terms of computation time (generation time, bit rate and number of cycles needed to generate one byte) and security.
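For reference, the two chaotic maps named above can be written out as follows; the sketch combines them in the simplest possible way and does not reproduce the recursive filters or the binary diffusion matrix [D]. Seeds and control parameters are assumptions.

    def skew_tent(x, p=0.6):
        """Skew tent map on (0, 1) with control parameter p."""
        return x / p if x < p else (1.0 - x) / (1.0 - p)

    def pwlc(x, p=0.3):
        """Piece-wise linear chaotic map on (0, 1) with control parameter p in (0, 0.5)."""
        if x < p:
            return x / p
        if x <= 0.5:
            return (x - p) / (0.5 - p)
        return pwlc(1.0 - x, p)                                  # symmetric branch for x > 0.5

    def pcng_bytes(n, x=0.123456, y=0.654321):
        """Produce n bytes by XOR-combining the quantized outputs of the two maps."""
        out = bytearray()
        for _ in range(n):
            x, y = skew_tent(x), pwlc(y)
            out.append((int(x * 256) & 0xFF) ^ (int(y * 256) & 0xFF))
        return bytes(out)

    print(pcng_bytes(8).hex())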
CAPTCHA is a type of challenge-response test used to ensure that a response is generated by a human and not by a computerized robot. CAPTCHAs are getting harder because the latest advanced pattern recognition and machine learning algorithms are capable of solving simpler CAPTCHAs. However, some enhancement procedures make CAPTCHAs too difficult for humans to recognize. This paper addresses the problem with a next-generation, human-friendly mini-game CAPTCHA and quantifies the usability of CAPTCHAs.
With the growing success of big data use, many challenges have appeared. Timeliness, scalability and privacy are the main problems that researchers attempt to figure out. Privacy preservation is now a highly active domain of research, and many works and concepts have seen the light within this theme. One of these concepts is de-identification techniques. De-identification is a specific area that consists of finding and removing sensitive information, either by replacing it, encrypting it or adding noise to it, using several techniques such as cryptography and data mining. In this report, we present a new model for the de-identification of textual data using a specific immune-system algorithm known as CLONALG.
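The end effect of de-identification (locating and masking sensitive fields) can be shown with a plain regex find-and-replace, as in the sketch below; the CLONALG immune-system search itself is not reproduced, and the patterns and labels are assumptions.

    import re

    PATTERNS = {                                                 # assumed examples of sensitive fields
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "PHONE": r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b",
        "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    }

    def deidentify(text):
        """Replace every detected sensitive value with a generic placeholder label."""
        for label, pattern in PATTERNS.items():
            text = re.sub(pattern, f"[{label}]", text)
        return text

    print(deidentify("Contact john.doe@example.com or 555-123-4567."))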
Wireless sensor networks offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to achieve secure communication among nodes, many approaches employ symmetric encryption, and several key management schemes have been proposed to establish the symmetric keys. This paper presents an innovative key management scheme called random seed distribution with transitory master key, which adopts the random distribution of secret material and a transitory master key used to generate pairwise keys. The proposed approach addresses the main drawbacks of previous approaches based on these techniques. Moreover, it outperforms state-of-the-art protocols by always providing a high security level.
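A minimal sketch of the transitory-master-key idea, under the assumption that pairwise keys are derived with an HMAC over the two node identifiers; the paper's random seed distribution and exact derivation are not reproduced.

    import hmac, hashlib, os

    MASTER_KEY = os.urandom(16)              # transitory master key preloaded before deployment

    def pairwise_key(node_a, node_b):
        """Derive K_ab = HMAC(master, min(id) || max(id)), identical on both nodes."""
        lo, hi = sorted([node_a, node_b])
        return hmac.new(MASTER_KEY, lo + hi, hashlib.sha256).digest()

    k_ab = pairwise_key(b"node-01", b"node-07")
    k_ba = pairwise_key(b"node-07", b"node-01")
    assert k_ab == k_ba                      # both endpoints compute the same pairwise key
    MASTER_KEY = None                        # after the transitory phase the master key is erased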
In order to deal with the shortcomings of security management systems, this work proposes a methodology based on the agent paradigm for cybersecurity risk management. In this approach, a system is decomposed into agents that may be used to attain goals established by attackers. Threats to the business are realized through attacker goals targeting service and deployment agents. To support proactive behavior, sensors linked to security mechanisms are analyzed in accordance with a model for Situational Awareness (SA) [4].
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, hybridizing an adaptive Fourier decomposition (AFD) algorithm with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD executes efficient lossy compression with high fidelity; in the second stage, SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated on 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
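For clarity, the two figures of merit quoted above (CR and PRD) are written out below on a placeholder signal; the AFD/SS codec itself is not reproduced, and PRD conventions vary slightly (some definitions subtract the signal mean).

    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """CR = size of the raw record divided by the size of the compressed stream."""
        return original_bits / compressed_bits

    def prd(x, x_rec):
        """Percentage root-mean-square difference between original and reconstructed ECG."""
        x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    x = np.sin(np.linspace(0, 20, 1000))                 # placeholder "ECG" segment
    x_rec = x + np.random.normal(0, 0.01, x.shape)       # placeholder reconstruction
    print(compression_ratio(1000 * 11, 500), prd(x, x_rec))   # 11-bit samples vs. 500 compressed bits (assumed)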
The efficient encryption and decryption of data is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm designed to achieve a higher level of security. The algorithm makes it possible to hide the meaning of a message in unprintable characters. The main idea of this paper is to make the encrypted message undoubtedly unprintable using several rounds of ASCII conversion and a cyclic mathematical function. The original message is divided into packets, and binary matrices are formed for each packet to produce the unprintable encrypted message by bringing the ASCII value of each character below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained from three rounds of encryption, becomes an unprintable text, through which the algorithm achieves a higher level of security without increasing the size of the data or losing any data.
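A toy version of the "unprintable ciphertext" idea is sketched below: each byte is split into two 4-bit halves, shifted by a position-dependent cyclic offset, and mapped into the control-character range below 32. Unlike the paper's scheme, this toy doubles the message length, and the key and offsets are assumptions.

    def encrypt_unprintable(text, key=7):
        """Map each character to two bytes in the range 0..15 (all below ASCII 32)."""
        out = bytearray()
        for i, ch in enumerate(text.encode("ascii")):
            hi, lo = ch >> 4, ch & 0x0F
            out.append((hi + key + i) % 16)
            out.append((lo + key + i) % 16)
        return bytes(out)

    def decrypt_unprintable(data, key=7):
        """Undo the cyclic offsets and reassemble the original characters."""
        chars = []
        for i in range(0, len(data), 2):
            hi = (data[i] - key - i // 2) % 16
            lo = (data[i + 1] - key - i // 2) % 16
            chars.append(chr((hi << 4) | lo))
        return "".join(chars)

    assert decrypt_unprintable(encrypt_unprintable("Secret message")) == "Secret message"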
Single sign-on (SSO) is an identity management technique that gives users the ability to use multiple Web services with one set of credentials. However, when the authentication server is down or unavailable, users cannot access Web services, even if the services themselves are operating normally. Therefore, enabling continuous use is important in single sign-on. In this paper, we present a security framework to overcome credential problems when accessing multiple web applications. We explain the system functionality in terms of authorization and authentication, and we consider these methods from the viewpoint of continuity, security and efficiency, which makes the framework highly secure.
In this article, researcher collaboration patterns and research topics in Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit the scale-free property and the small-world effect. From these networks, the authors identify the key researchers, the key institutions, and three important topics.
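The two network properties mentioned (scale-free degree distribution and the small-world effect) can be checked with networkx on a synthetic collaboration graph, as sketched below; the real ISI co-authorship data is not reproduced.

    import networkx as nx

    g = nx.barabasi_albert_graph(n=500, m=2, seed=1)     # preferential attachment gives a scale-free graph
    print("average clustering:", nx.average_clustering(g))
    print("average shortest path:", nx.average_shortest_path_length(g))
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    print("top-5 hub degrees:", degrees[:5])             # heavy-tailed degree sequence reveals the key hubs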
An intrusion detection system (IDS) inspects all inbound and outbound network activity and identifies suspicious patterns that may indicate a network or system attack by someone attempting to break into or compromise a system. In a network-based system (NIDS), the individual packets flowing through a network are analyzed; in a host-based system, the IDS examines the activity on each individual computer or host. IDS techniques are divided into two categories: misuse detection and anomaly detection. In recent years, mobile-agent-based technology has been used in distributed systems because of its characteristics of mobility and autonomy. In this work we aim to combine an IDS with the mobile agent concept to build a more scalable, effective and knowledgeable system.