Bibliography
The current computing era requires humans to interact with computers and networks, yet there is still no complete solution to security problems. Security threats pervade the Internet, and users sometimes lose confidence in the reliability of server-based access. Even though more and more cryptographic algorithms are being introduced to ensure the integrity and authenticity of data, unreliable threats still penetrate networks through inconsistent vulnerabilities. Vulnerable sites can take control of a user's computer and perform harmful actions without the user's consent. Although firewalls and protocols may support our browsers by setting certain rules, the system still cannot guarantee data reliability and confidentiality. Since these problems arise from network access, TCP/IP parameters are considered as the dataset for analysis. By preprocessing TCP/IP packets, a model can be built on the dataset and the data grouped into clusters. The dataset is then classified into regular traffic patterns and anomalous patterns using the KNN classification algorithm. Based on the obtained patterns for normal and threat datasets, security devices and systems can set rules and guidelines to learn from and take the needed action. This paper analyses how a computer can learn security actions from datasets of previously observed events.
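As a minimal sketch of the classification step described above (not the authors' code), the following uses a k-nearest-neighbours classifier on synthetic per-flow features; the feature names and the generated data are illustrative assumptions standing in for preprocessed TCP/IP packet records.

```python
# Minimal sketch: classify TCP/IP flow records as "regular" vs. "anomalous"
# with KNN.  The feature layout and synthetic data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-flow features: [duration, packets, bytes, distinct ports]
normal = rng.normal([1.0, 20, 15000, 2], [0.5, 5, 3000, 1], size=(500, 4))
attack = rng.normal([0.1, 200, 90000, 40], [0.05, 50, 9000, 10], size=(500, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)           # 0 = regular, 1 = anomalous

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)        # KNN is distance-based, so scale
clf = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_train), y_train)
print("accuracy:", clf.score(scaler.transform(X_test), y_test))
```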
With the growth of smartphone sales and app usage, fingerprinting and identification of smartphone apps have become a considerable threat to user security and privacy. Traffic analysis is one of the most common methods for identifying apps. Traditional countermeasures to traffic analysis include traffic morphing and multipath routing. The basic idea of multipath routing is to increase the difficulty for an adversary to eavesdrop on all traffic by splitting it into several subflows and transmitting them over different routes. Previous work on multipath routing mainly focuses on Wireless Sensor Networks (WSNs) or Mobile Ad Hoc Networks (MANETs). In this paper, we propose a multipath routing scheme for smartphones with edge network assistance to mitigate traffic analysis attacks. We consider an adversary with limited capability, that is, one who can only intercept the traffic of one node with a certain attack probability, and we try to minimize the traffic the adversary can intercept. We formulate our design as a flow routing optimization problem, and a heuristic algorithm is proposed to solve it. Finally, we present simulation results for our scheme and show that it can effectively protect smartphones from traffic analysis attacks.
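An illustrative sketch of the kind of heuristic such a formulation invites (not the paper's algorithm): traffic is split greedily across paths in order of increasing interception probability, and the expected intercepted volume is reported. The path capacities and probabilities below are invented for the example.

```python
# Greedy illustration: allocate a traffic demand across several edge-assisted
# paths, preferring the paths least likely to be tapped.
def split_traffic(demand, paths):
    """paths: list of (capacity, intercept_probability) tuples (assumed inputs)."""
    allocation = []
    remaining = demand
    for cap, p in sorted(paths, key=lambda x: x[1]):
        send = min(cap, remaining)
        allocation.append((send, p))
        remaining -= send
        if remaining <= 0:
            break
    expected_intercepted = sum(send * p for send, p in allocation)
    return allocation, expected_intercepted

alloc, risk = split_traffic(100.0, [(40, 0.1), (40, 0.3), (50, 0.05)])
print(alloc, risk)
```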
Supervisory Control and Data Acquisition (SCADA) systems play a critical role in the operation of large-scale distributed industrial systems. There are many vulnerabilities in SCADA systems, and inadvertent events or malicious attacks from outside as well as inside could lead to catastrophic consequences. Network-based intrusion detection is a preferred approach to provide security analysis for SCADA systems due to its less intrusive nature. Data in SCADA network traffic can generally be divided into transport, operation, and content levels. Most existing solutions only focus on monitoring and event detection at one or two levels of data, which is not enough to detect and reason about attacks at all three levels. In this paper, we develop a novel edge-based multi-level anomaly detection framework for SCADA networks named EDMAND. EDMAND monitors all three levels of network traffic data and applies appropriate anomaly detection methods based on the distinct characteristics of the data. Alerts are generated, aggregated, and prioritized before being sent back to control centers. A prototype of the framework is built to evaluate its detection ability and time overhead.
An important source of cyber-attacks is malware, which proliferates in different forms such as botnets. Botnet malware typically looks for vulnerable devices across the Internet, rather than targeting specific individuals, companies, or industries. It attempts to infect as many connected devices as possible, using their resources for automated tasks that may cause significant economic and social harm while remaining hidden from the user and device. This makes such activity very difficult to detect. A considerable amount of research has been conducted to detect and prevent botnet infestation. In this paper, we attempt to create a foundation for an anomaly-based intrusion detection system using a statistical learning method to improve network security and reduce human involvement in botnet detection. We focus on identifying the best features to detect botnet activity within network traffic using a lightweight logistic regression model. The network traffic is processed by Bro, a popular network monitoring framework which provides aggregate statistics about the packets exchanged between a source and destination over a certain time interval. These statistics serve as features to a logistic regression model responsible for classifying malicious and benign traffic. Our model is easy to implement and simple to interpret. We characterized and modeled 8 different botnet families separately and as a mixed dataset. Finally, we measured the performance of our model on multiple parameters using the F1 score, accuracy, and Area Under the Curve (AUC).
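A minimal sketch of this pipeline's final stage, assuming the Bro/Zeek aggregate statistics have already been turned into a numeric feature matrix (the synthetic data below only stands in for those features):

```python
# Logistic regression over flow statistics, scored with F1, accuracy, and AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))                 # e.g. pkts/s, bytes/s, duration, ...
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, prob))
```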
The paper presents a new technique for botnet detection in corporate area networks. It is based on algorithms from artificial immune systems. The proposed approach is able to distinguish benign network traffic from malicious traffic using the clonal selection algorithm, taking into account the features of a botnet's presence in the network. The approach constitutes the main improvements of the BotGRABBER system and is able to detect IRC, HTTP, DNS, and P2P botnets.
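For orientation only, a highly simplified clonal-selection sketch (this is a generic textbook-style illustration, not BotGRABBER): detector vectors are scored by their affinity to known malicious flow samples, and the fittest detectors are cloned and mutated; the feature values are made up.

```python
# Toy clonal selection: evolve detectors toward known malicious flow features.
import numpy as np

rng = np.random.default_rng(2)
malicious = rng.normal(5.0, 1.0, size=(50, 3))      # assumed botnet flow features

detectors = rng.uniform(0, 10, size=(30, 3))
for generation in range(20):
    # Affinity: negative distance from each detector to its nearest malicious sample.
    dists = np.linalg.norm(detectors[:, None, :] - malicious[None, :, :], axis=2)
    affinity = -dists.min(axis=1)
    best = detectors[np.argsort(affinity)[-10:]]     # select the fittest detectors
    clones = np.repeat(best, 3, axis=0)
    clones += rng.normal(scale=0.3, size=clones.shape)   # hypermutation
    detectors = np.vstack([best, clones])

def is_malicious(flow, radius=1.0):
    return np.linalg.norm(detectors - flow, axis=1).min() < radius

print(is_malicious(np.array([5.1, 4.8, 5.3])), is_malicious(np.array([0.5, 0.2, 0.9])))
```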
The IRC botnet is the earliest and one of the most significant botnet groups. Its characteristic is to control multiple zombie hosts through the IRC protocol and to construct command-and-control channels. Related research analyzes the large amount of network traffic generated by command interaction between botnet clients and the C&C server. Packet-capture traffic monitoring on the network is currently one of the more effective detection methods, but this information does not reflect the essential characteristics of the IRC botnet, and erroneous judgments frequently occur. To identify whether botnet control servers belong to a homogeneous botnet, dynamic network communication characteristic curves are extracted. For these unequal-length time series, dynamic time warping (DTW) distance clustering is used to identify homologous botnets by category. To improve detection speed, the experiments use SAX to reduce the dimensionality of the extracted curves, reducing the time cost without reducing accuracy.
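A small sketch of the two building blocks mentioned above: a DTW distance between unequal-length traffic curves, and a PAA-style reduction in the spirit of SAX (SAX would additionally map the reduced values to symbols). The series values are invented.

```python
# DTW distance plus piecewise aggregate approximation (PAA) for traffic curves.
import numpy as np

def dtw(a, b):
    """DTW distance between two 1-D sequences of possibly different length."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def paa(series, segments):
    """Shrink a series to `segments` segment means (the SAX pre-step)."""
    chunks = np.array_split(np.asarray(series, dtype=float), segments)
    return np.array([c.mean() for c in chunks])

curve_a = [2, 3, 10, 12, 11, 3, 2, 2, 9, 11]   # example C&C traffic bursts
curve_b = [2, 2, 11, 12, 3, 2, 10, 12, 11]
print(dtw(paa(curve_a, 5), paa(curve_b, 5)))
```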
The botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, and spreading spam. Detecting modern botnets, which are continuously improving to evade detection, is a challenging issue. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts a convolutional version of effective flow-based features and trains a classification model using a feed-forward artificial neural network. The experimental results show that detection accuracy using the convolutional features is better than with the traditional features, achieving 94.7% detection accuracy and a 2.2% false positive rate on known P2P botnet datasets. Furthermore, our system provides an additional confidence-testing stage to enhance detection performance: it further classifies the network traffic for which the neural network has insufficient confidence. The experiment shows that this stage can increase the detection accuracy to 98.6% and decrease the false positive rate to 0.5%.
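A sketch of the classification stage only (the convolutional feature extraction is not reproduced): a feed-forward neural network over placeholder flow features, with the low-confidence flows flagged for a hypothetical second-stage check, mirroring the idea described above.

```python
# Feed-forward neural network over (synthetic) flow features, plus a
# low-confidence filter in the spirit of the paper's confidence testing.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 16))                  # stand-in for flow feature vectors
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=3000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=3)
net.fit(X_tr, y_tr)
proba = net.predict_proba(X_te)[:, 1]
low_confidence = np.abs(proba - 0.5) < 0.1       # candidates for a second stage
print("accuracy:", net.score(X_te, y_te), "low-confidence flows:", low_confidence.sum())
```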
Software Defined Networking (SDN) is a novel network management architecture. In SDN, switches do not process incoming packets as in a conventional network; they match incoming packets against their forwarding tables, and packets without a matching entry are sent for processing to the controller, which acts as the operating system of the SDN. A Distributed Denial of Service (DDoS) attack is one of the biggest threats to cyber security in SDN networks. The attack can occur at the network layer or the application layer of the compromised systems connected to the network. In this paper, a machine learning based method is proposed that can classify incoming packets as infected or not. The machine learning algorithms adopted for this task are Naive Bayes, K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), used to detect anomalous behavior in the data traffic. These three algorithms are compared according to their performance, and KNN is found to be the most suitable of the three. The performance measure used is the detection rate of infected packets.
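A minimal comparison sketch in the spirit of that evaluation (synthetic placeholder features, not the paper's dataset), scoring each classifier by its detection rate, i.e. recall on the attack class:

```python
# Compare Naive Bayes, KNN, and SVM by detection rate on the same data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0.3).astype(int)   # 1 = infected traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=4)

for name, clf in [("NaiveBayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    rate = recall_score(y_te, clf.predict(X_te))      # detection rate of attacks
    print(f"{name}: detection rate = {rate:.3f}")
```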
Anonymity networks provide privacy to users by relaying their data through multiple nodes in order to reach the final destination anonymously. Multiple layers of encryption are used to protect users' privacy from attackers or even from the operators of the relay stations. In this research, we show how flow analysis can be used to identify encrypted anonymity network traffic under four scenarios: (i) identifying anonymity networks compared to normal background traffic; (ii) identifying the type of application used on the anonymity networks; (iii) identifying the traffic flow behaviors of anonymity network users; and (iv) identifying/profiling the users of an anonymity network based on their traffic flow behavior. To study these, we employ a machine learning based flow analysis approach and explore how far such an approach can be pushed.
The Internet of Things has become a subject of interest across different industry domains. It includes 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks), which is used for a variety of applications including home automation, sensor networks, manufacturing, and industrial applications. However, gathering such a huge amount of data from so many different domains raises problems of traffic congestion, reliability, and energy efficiency. To address these problems, a content based routing (CBR) technique is proposed, where routing paths are decided according to the type of content. By routing correlated data to hop nodes for processing, a higher data aggregation ratio can be obtained, which in turn reduces traffic congestion and minimizes energy consumption. CBR is implemented on top of the existing RPL (Routing Protocol for Low-Power and Lossy Networks) in the Contiki operating system using the Cooja simulator. The analysis is carried out on the basis of average power consumption, packet delivery ratio, etc.
This paper focuses on optimizing the sigmoid filter for detecting Low-Rate DoS attacks. Although a sigmoid filter can help detect the attacker, it can severely affect network efficiency. Unlike high-rate attacks, Low-Rate DoS attacks such as ``Shrew'' and ``New Shrew'' are hard to detect. Attackers choose a malicious low-rate bandwidth to exploit TCP's congestion control window algorithm and the retransmission timeout mechanism. We simulated the attacker traffic using NS-3. The sigmoid filter was used to create a threshold bandwidth filter at the router that allowed a specific bandwidth, so that traffic exceeding the threshold would be dropped or redirected to a honeypot server instead. We simulated the sigmoid filter in MATLAB, taking the attacker's and legitimate users' traffic generated by NS-3 as input. We ran the experiment three times with different threshold values correlated to the TCP packet size. The probabilities of detecting the attacker traffic were 25%, 50%, and 60%, respectively. However, we also observed drops in legitimate user traffic with probabilities of 75%, 50%, and 85%, respectively.
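An illustrative sigmoid-filter sketch (the parameterisation and threshold below are assumptions, not the authors' exact MATLAB filter): traffic whose rate exceeds a threshold receives a drop-or-redirect probability that rises smoothly along a sigmoid rather than as a hard cutoff.

```python
# Sigmoid-shaped drop/redirect probability as a function of traffic rate.
import math

def drop_probability(rate_bps, threshold_bps, steepness=1e-3):
    """Probability of dropping (or redirecting to a honeypot) a flow."""
    return 1.0 / (1.0 + math.exp(-steepness * (rate_bps - threshold_bps)))

threshold = 50_000        # illustrative value tied to TCP packet size
for rate in (10_000, 50_000, 80_000, 200_000):
    print(rate, "->", round(drop_probability(rate, threshold), 3))
```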
Cloud is a requirement of today's competitive world, which demands flexible, agile, and adaptable technology to keep pace with the rapidly changing IT industry. Cloud offers scalable, on-demand, pay-as-you-go services to enterprises and has hence become part of the growing trend in organizations' IT service models. With this emerging trend, security concerns have further increased, and one of the biggest concerns related to cloud is the DDoS attack. A DDoS attack tends to exhaust all available resources and leads to the unavailability of cloud services to legitimate users. In this paper, the concept of fog computing is used; it is an extension of cloud computing that performs analysis at the edge of the network, i.e., it brings intelligence to the edge for quick real-time decision making and reduces the amount of data forwarded to the cloud. We propose a framework in which DDoS attack traffic is generated using different tools and is made to pass through a fog defender on its way to the cloud. Furthermore, rules are applied on the fog defender to detect and filter DDoS attack traffic targeted at the cloud.
One of the biggest problems of today's Internet technologies is cyber attacks. In this paper, DDoS attacks are detected by deep packet inspection. Initially, packets are captured by listening to network traffic, and packet filtering is applied to the desired number and type of packets. These packets are recorded in a database for analysis; daily values and average values are compared against known attack patterns to determine whether a DDoS attack is being attempted in real-time systems.
Nowadays, Internet Service Providers (ISPs) depend on Deep Packet Inspection (DPI) approaches, which are the most precise techniques for traffic identification and classification. However, constructing high-performance DPI approaches requires a careful and in-depth computing system design because of the demands on memory and processing power. Membership query data structures, specifically the Bloom filter (BF), have been employed as a matching-check tool in DPI approaches: they store signature fingerprints in order to test for the presence of these signatures in incoming network flows. The main issue that arises when employing a Bloom filter in DPI is the need to use k hash functions, which imposes additional computation overhead that degrades performance. Consequently, in this paper, a new design and implementation for a DPI approach is proposed. This DPI uses a membership query data structure called the Cuckoo filter (CF) as its matching-check tool. The CF has many advantages over the BF, such as lower memory consumption, a lower false positive rate, higher insert performance, higher lookup throughput, and support for delete operations. The experiments show that the proposed approach offers better performance than approaches that use the Bloom filter.
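For reference, a minimal cuckoo-filter sketch (illustrative only, not a production DPI engine and not the paper's implementation): signature fingerprints are stored in one of two candidate buckets chosen by partial-key cuckoo hashing, giving fast membership checks and supporting deletion; the bucket sizes and 16-bit fingerprints are assumed parameters.

```python
# Toy cuckoo filter used as a signature membership check.
import hashlib, random

class CuckooFilter:
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        assert num_buckets & (num_buckets - 1) == 0, "power of two"
        self.m, self.b, self.max_kicks = num_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, data):
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    def _fingerprint(self, item):
        return self._hash(item.encode()) & 0xFFFF or 1   # 16-bit, non-zero

    def _indices(self, item, fp):
        i1 = self._hash(item.encode()) % self.m
        i2 = (i1 ^ self._hash(fp.to_bytes(2, "big"))) % self.m
        return i1, i2

    def insert(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):                   # evict and relocate
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = (i ^ self._hash(fp.to_bytes(2, "big"))) % self.m
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        return False                                      # filter is too full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1, i2 = self._indices(item, fp)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

cf = CuckooFilter()
cf.insert("malware-signature-0xdeadbeef")
print(cf.contains("malware-signature-0xdeadbeef"), cf.contains("benign-payload"))
```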
Traffic classification, i.e., associating network traffic with the application that generated it, is an important tool for several tasks spanning different fields (security, management, traffic engineering, R&D). This process is challenged by applications that preserve Internet users' privacy by encrypting the communication content, and even more so by anonymity tools, which additionally hide the source, the destination, and the nature of the communication. In this paper, leveraging a public dataset released in 2017, we provide (repeatable) classification results with the aim of investigating to what degree the specific anonymity tool (and the traffic it hides) can be identified, when compared to the traffic of the other considered anonymity tools, using machine learning approaches based on statistical features alone. To this end, four classifiers are trained and tested on the dataset: (i) Naïve Bayes, (ii) Bayesian Network, (iii) C4.5, and (iv) Random Forest. Results show that the three considered anonymity networks (Tor, I2P, JonDonym) can be easily distinguished (with an accuracy of 99.99%), even telling the specific application generating the traffic (with an accuracy of 98.00%).
Live migration is the process used in the virtualization environments of datacenters to obtain zero downtime during system maintenance. However, when live virtual machines are migrated together with system files and storage data, network traffic increases across the available bandwidth and migration time is prolonged. The migration time must be reduced to maintain system performance by analyzing and optimizing the storage overheads, which arise mainly from unnecessary duplicated data transferred during live migration. Hence, a storage device such as NAS is needed that keeps the duplicated data residing in both the source and the target physical hosts. The proposed hash-map based algorithm maps all I/O operations in order to track duplicated data by assigning hash values to both NAS and RAM data. Only the unique data is then sent to the target host, without affecting the service level agreement (SLA), VM migration time, application downtime, SLA violations, or pre- and post-migration overheads.
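A simplified sketch of the hash-map idea (the block size and interfaces are assumptions, not the paper's implementation): every block of VM state is hashed, and blocks whose hash the target already holds are skipped, so only unique data crosses the network.

```python
# Hash-based deduplication of blocks transferred during live migration.
import hashlib

BLOCK = 4096  # bytes; illustrative block size

def migrate(blocks, target_hashes):
    """blocks: iterable of byte strings; target_hashes: hashes already at target."""
    sent, skipped = 0, 0
    for block in blocks:
        digest = hashlib.sha1(block).hexdigest()
        if digest in target_hashes:
            skipped += 1                  # duplicate: the target already has it
        else:
            target_hashes.add(digest)     # "transfer" the block, record its hash
            sent += 1
    return sent, skipped

vm_state = [b"A" * BLOCK, b"B" * BLOCK, b"A" * BLOCK, b"C" * BLOCK]
print(migrate(vm_state, target_hashes={hashlib.sha1(b"B" * BLOCK).hexdigest()}))
# -> (2, 2): two unique blocks sent, two duplicates skipped
```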
Modbus over TCP/IP is one of the most popular industrial network protocols and is widely used in critical infrastructures. However, the vulnerability of the Modbus TCP protocol has attracted wide public concern. Traditional intrusion detection methods can identify some intrusion behaviors, but problems remain. In this paper, we present an innovative approach, SD-IDS (Stereo Depth IDS), designed to perform real-time deep inspection of Modbus TCP traffic. The SD-IDS algorithm is composed of two parts: rule extraction and deep inspection. The rule extraction module not only analyzes the characteristics of industrial traffic but also explores the semantic relationships among the key fields of the Modbus TCP protocol. The deep inspection module is based on rule-based anomaly intrusion detection. Furthermore, we use online tests to evaluate the performance of our SD-IDS system. Our approach achieves low rates of false positives and false negatives.
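A rough sketch of rule-based deep inspection for Modbus/TCP (the field layout follows the standard MBAP header; the whitelist rules themselves are invented for the example and are not SD-IDS's actual rules):

```python
# Parse a Modbus/TCP frame and check it against simple whitelist rules.
import struct

ALLOWED_FUNCTIONS = {3, 4, 6}          # read holding/input regs, write single reg
WRITABLE_REGISTERS = range(100, 200)   # assumed plant-specific rule

def inspect(frame: bytes) -> str:
    # MBAP header: transaction id, protocol id, length (2 bytes each), unit id,
    # followed by the PDU starting with the function code.
    tid, proto, length, unit, func = struct.unpack(">HHHBB", frame[:8])
    if proto != 0:
        return "alert: non-Modbus protocol identifier"
    if func not in ALLOWED_FUNCTIONS:
        return f"alert: function code {func} not allowed"
    if func == 6:                       # write single register: address, value
        addr, value = struct.unpack(">HH", frame[8:12])
        if addr not in WRITABLE_REGISTERS:
            return f"alert: write to forbidden register {addr}"
    return "ok"

# Write value 0x0001 to register 50 (forbidden by the example rule set).
frame = struct.pack(">HHHBBHH", 1, 0, 6, 1, 6, 50, 1)
print(inspect(frame))
```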
Botnets are a growing threat to the security of data and services on a global level. They exploit vulnerabilities in networks and host machines to harvest sensitive information, or make use of network resources such as memory or bandwidth in cyber-crime campaigns. Bot programs are by nature largely automated and systematic, and this is often used to detect them. In this paper, we extend existing work in this area by proposing a network event correlation method to produce graphs of flows generated by botnets, outlining the implementation and functionality of this approach. We also show how this method can be combined with statistical flow-based analysis to provide a descriptive chain of events, and we test it on public datasets with an overall success rate of 94.1%.
A Mobile Ad-hoc Network (MANET) is a prominent technology in the wireless networking field in which mobile nodes operate in a distributed manner and collaborate with each other to provide multi-hop communication between source and destination nodes. Generally, the main assumption in a MANET is that each node is a trusted node. However, in real scenarios, there are unreliable nodes that perform the black hole attack, in which misbehaving nodes attract all traffic towards themselves by falsely claiming to have the shortest path to the destination with a very high destination sequence number, and then drop all data packets. In this paper, we present different categories of black hole attack mitigation techniques and summarize the various techniques along with the drawbacks that need to be considered while designing an efficient protocol.
In recent years the use of wireless ad hoc networks has seen an increase in applications. A large part of the research has focused on Mobile Ad Hoc Networks (MANETs), due to their use in vehicular networks and battlefield communications, among others. These peer-to-peer networks are usually used to test novel communication protocols but often leave out network security. A wide range of attacks can occur, as in wired networks, and some of them are more damaging in MANETs. Because of the characteristics of these networks, conventional methods for the detection of attack traffic are ineffective. Intrusion Detection Systems (IDSs) are built on various detection techniques, one of the most important being anomaly detection. IDSs based only on signatures of past attacks are less effective, even more so if they are centralized. Our work focuses on adding a novel machine learning technique to the detection engine, which recognizes attack traffic in an online way (rather than storing it for later analysis), rewriting IDS rules on the fly. Experiments were performed using the Dockemu emulation tool with Linux containers, IPv6, and OLSR as the routing protocol, leading to promising results.
The continuous advance of recent cloud-based computer networks has generated a number of security challenges associated with intrusions in network systems. With the exponential increase in the volume of network traffic data, the involvement of humans in such detection systems is time consuming and non-trivial. Secondly, network traffic data tends to be highly dimensional, comprising numerous features and attributes, making classification challenging and susceptible to the curse of dimensionality. Given such scenarios, the need arises for dimensionality reduction and feature selection combined with machine learning techniques for the classification of such data. As a contribution, this paper employs data mining techniques in a cloud-based environment by selecting attributes and features with the least importance in terms of weight for the classification. The standard practice is to select features with better weights while ignoring those with the least weights; in this study, we investigate whether we can make predictions using those features with the least weights. The motivation is that adversaries use stealth to hide their activities from the obvious. The question then is: can we predict any stealth activity of an adversary using the least-observed attributes? In this particular study, we employ information gain to select the attributes with the lowest weights and then apply machine learning to classify whether a combination of source and destination ports is attacked or not. The motivation of this investigation is to determine whether attributes of least importance can be used to predict whether an attack could occur. Our preliminary results show that even when the source and destination port attributes are used in combination with the features with the least weights, it is possible to classify such network traffic data and predict whether an attack will occur or not.
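A sketch of the core idea of classifying with the *least* informative attributes (the data, feature count, and classifier choice below are assumptions for illustration): rank features by information gain, keep the lowest-ranked ones, and measure how well they still predict attacks.

```python
# Select the features with the lowest information gain, then classify with them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(1500, 10))                       # 10 placeholder traffic features
y = (0.8 * X[:, 0] + 0.1 * X[:, 7] + rng.normal(scale=0.7, size=1500) > 0).astype(int)

scores = mutual_info_classif(X, y, random_state=5)
lowest = np.argsort(scores)[:4]                        # 4 least-important features
print("selected (lowest information gain):", lowest)

clf = RandomForestClassifier(n_estimators=100, random_state=5)
acc = cross_val_score(clf, X[:, lowest], y, cv=5).mean()
print("accuracy with least-weight features:", round(acc, 3))
```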