Bibliography
Sensor networks are mainly deployed to monitor and report real events, which makes event source anonymity difficult and expensive to achieve, since sensor networks are severely resource-limited. The source anonymity problem, i.e., data obscurity, requires that an unauthorized observer be unable to infer the origin of events by analyzing network traffic; it has emerged as an important topic in the security of wireless sensor networks. This work surveys the different approaches to attaining source anonymity in wireless sensor networks, covering a variety of techniques based on different adversarial assumptions. The approach achieving the best source anonymity results is selected for further improvement of source location privacy. The paper suggests implementing the prominent and effective LSB steganography technique for this improvement.
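The abstract names LSB steganography as the vehicle for the improvement; as a reference point, here is a minimal sketch of classic LSB embedding and extraction (a generic illustration, not the paper's concrete design):

```python
# Minimal sketch of classic LSB steganography: one message bit is hidden in
# the least significant bit of each cover byte. Illustrative only.

def lsb_embed(cover: bytes, message_bits: list[int]) -> bytes:
    assert len(message_bits) <= len(cover), "cover too small for message"
    out = bytearray(cover)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & 0xFE) | (bit & 1)   # clear LSB, set to message bit
    return bytes(out)

def lsb_extract(stego: bytes, n_bits: int) -> list[int]:
    return [stego[i] & 1 for i in range(n_bits)]

cover = bytes(range(16))                 # toy "sensor payload"
bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, len(bits)) == bits
```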
The growth of the Internet has made IPv4 addresses a scarce resource. Due to slow IPv6 deployment, IANA-level IPv4 address exhaustion was reached before the world could transition to an IPv6-only Internet. The continuing need for IPv4 reachability will only be supported by IPv4 address sharing. This paper reviews ISP-level address sharing mechanisms, which allow Internet service providers to connect multiple customers who share a single IPv4 address. Some mechanisms come with severe and unpredicted consequences, and all of them come with tradeoffs. We propose a novel classification, which we apply to existing mechanisms such as NAT444 and DS-Lite and proposals such as 4rd, MAP, etc. Our tradeoff analysis reveals insights into many problems including: abuse attribution, performance degradation, address and port usage efficiency, direct intercustomer communication, and availability.
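As a concrete illustration of how ISP-level sharing divides one address among customers, A+P-style mechanisms such as MAP assign each customer sharing an IPv4 address a disjoint port set derived from a port-set ID. A minimal sketch of the port arithmetic (simplified: real MAP also uses offset bits to exclude the well-known ports):

```python
# Sketch of A+P-style port partitioning. Simplified relative to the real MAP
# port-mapping algorithm; field widths here are illustrative.

def port_range(psid: int, sharing_ratio: int) -> range:
    """Contiguous port set for the customer identified by psid when
    sharing_ratio customers share one IPv4 address."""
    assert sharing_ratio & (sharing_ratio - 1) == 0, "use a power of two"
    ports_per_customer = 65536 // sharing_ratio
    start = psid * ports_per_customer
    return range(start, start + ports_per_customer)

# 64 customers per address -> 1024 ports each; customer 3 gets 3072..4095.
r = port_range(psid=3, sharing_ratio=64)
print(r.start, r.stop - 1)   # 3072 4095
```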
In wireless sensor networks (WSNs), many tiny sensor nodes communicate over wireless links and collaborate with each other. The data collected by each node is forwarded towards the gateway node after aggregation by intermediate nodes. The data collected by WSN nodes must be secured while the nodes communicate among themselves over multi-hop wireless links. This requires energy-efficient cryptographic algorithms that can be ported to the resource-constrained nodes. Before any cryptographic algorithm can be used, trust must first be established among the WSN nodes, which calls for a key management technique. Due to the resource-constrained nature of WSN nodes and their remote deployment, implementing conventional key management techniques is infeasible. This work proposes a key management technique with reduced resource overheads that is highly suited to hierarchical WSN applications. Both identity-based key management (IBK) and probabilistic key pre-distribution schemes are employed at different hierarchical levels. The proposed technique has been implemented on IRIS WSN nodes, and a comparison of resource overheads has been carried out.
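For the probabilistic half of the design, a minimal sketch of Eschenauer–Gligor-style key pre-distribution, the scheme family the abstract draws on (pool and ring sizes are illustrative):

```python
import random

# Sketch of probabilistic key pre-distribution: every node is preloaded with
# a random "key ring" drawn from a shared pool; two neighbors can secure
# their link iff their rings intersect. Sizes are illustrative.

POOL_SIZE, RING_SIZE = 1000, 50
key_pool = {kid: random.getrandbits(128) for kid in range(POOL_SIZE)}

def preload(node_count: int) -> dict[int, set[int]]:
    return {n: set(random.sample(range(POOL_SIZE), RING_SIZE))
            for n in range(node_count)}

def shared_key(rings, a, b):
    common = rings[a] & rings[b]
    return key_pool[min(common)] if common else None

rings = preload(20)
print(shared_key(rings, 0, 1) is not None)   # True with high probability
```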
Most detection approaches, such as signature-based, anomaly-based, and specification-based, cannot analyze and detect all types of malware. The signature-based approach has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate. Specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection using several techniques, such as polymorphism, metamorphism, and various hiding techniques. To overcome these issues, we propose a new approach for malware analysis and detection that consists of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance, Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. All of these stages are integrated as interrelated processes in the proposed approach. This approach addresses the limitations of all three existing approaches by monitoring the behavioral activity of malware at each and every stage of its life cycle and finally reporting the maliciousness of the files or software.
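A hypothetical sketch of how per-stage observations could be combined into one verdict, in the spirit of the staged life-cycle approach above; the stage names follow the abstract, but the scoring rule is our own assumption:

```python
# Hypothetical sketch: fold per-stage behavioral observations into a verdict.
# Stage names come from the abstract; the >=3-stages scoring rule is ours.

STAGES = ["inbound_scan", "inbound_attack", "spontaneous_attack",
          "client_side_exploit", "egg_download", "device_infection",
          "local_reconnaissance", "network_surveillance", "communications",
          "peer_coordination", "attack_preparation", "outbound_propagation"]

def assess(observations: dict[str, bool]) -> str:
    hits = [s for s in STAGES if observations.get(s)]
    if len(hits) >= 3:                     # assumed threshold, illustrative
        return f"malicious (stages: {', '.join(hits)})"
    return "benign or insufficient evidence"

print(assess({"inbound_scan": True, "egg_download": True,
              "device_infection": True}))
```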
Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks exhaust the resources of a server or service and make it unavailable to legitimate users. With the increasing use of online services and attacks on them, the importance of Intrusion Detection Systems (IDS) for detecting DoS/DDoS attacks has also grown. The detection accuracy and CPU utilization of a data-mining-based IDS are directly proportional to the quality of the training dataset used to train it. Various preprocessing methods, such as normalization, discretization, and fuzzification, are used by researchers to improve training dataset quality. This paper evaluates the effect of various data preprocessing methods on the detection accuracy of DoS/DDoS attack detection IDS and shows that the numeric-to-binary preprocessing method performs better than the others. Experimental results obtained using the KDD 99 dataset are provided to support the efficiency of the proposed combination.
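A minimal sketch of numeric-to-binary preprocessing: each numeric feature column is mapped to {0,1} by thresholding. The abstract does not state the thresholding rule, so the per-feature mean used below is an assumption:

```python
import numpy as np

# Numeric-to-binary preprocessing sketch: threshold each numeric feature.
# The per-feature-mean threshold is our assumption, not the paper's rule.

def numeric_to_binary(X: np.ndarray) -> np.ndarray:
    thresholds = X.mean(axis=0)          # one threshold per feature column
    return (X > thresholds).astype(np.uint8)

X = np.array([[0.0, 200.0],
              [1.0, 3000.0],
              [0.0, 150.0]])             # toy rows of numeric KDD features
print(numeric_to_binary(X))
```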
In the era of big data, many users and companies have started to move their data to cloud storage to simplify data management and reduce data maintenance cost. However, security and privacy issues become major concerns because third-party cloud service providers are not always trustworthy. Although data contents can be protected by encryption, the access patterns that contain important information are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data deployed in distributed file systems built upon hundreds or thousands of servers in a single or multiple geo-distributed cloud sites. Since the ORAM algorithm would lead to serious access load imbalance among storage servers, we study a data placement problem to achieve a load-balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can deal with large-scale problem sizes with respect to big data. Extensive simulations show that our proposed algorithm finds results close to the optimal solution and significantly outperforms a random data placement algorithm.
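The abstract does not spell out the placement algorithm, so here is a generic greedy sketch of the underlying idea: assign blocks in decreasing order of expected (ORAM-amplified) access load to the currently least-loaded server (an LPT-style heuristic, not the paper's exact algorithm):

```python
import heapq

# Generic greedy load-balanced placement sketch (LPT-style), not the paper's
# algorithm: heaviest blocks first, each onto the least-loaded server.

def place(block_loads: dict[str, float], n_servers: int) -> dict[str, int]:
    heap = [(0.0, s) for s in range(n_servers)]   # (current load, server id)
    heapq.heapify(heap)
    assignment = {}
    for block, load in sorted(block_loads.items(), key=lambda kv: -kv[1]):
        total, server = heapq.heappop(heap)
        assignment[block] = server
        heapq.heappush(heap, (total + load, server))
    return assignment

print(place({"a": 9.0, "b": 7.0, "c": 4.0, "d": 4.0}, n_servers=2))
```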
Dual-energy CT (DECT) has recently gained significant research interest owing to its ability to discriminate materials, and it is therefore widely applied in nuclear safety and security inspection. With current technology, DECT is typically realized using two sets of detectors, one for lower-energy X-rays and another for higher-energy X-rays, which makes the imaging system expensive and limits its practical deployment. In 2009, our group performed a preliminary study of a new low-cost system design that uses a complete data set for the lower energy level and only a sparse data set for the higher energy level. This could significantly reduce system cost, as far fewer detector elements are needed. The reconstruction method is the key to this system. In the present study, we further validate this design and propose a robust method involving three main steps: (1) estimate the missing data iteratively under total variation (TV) constraints; (2) use the reconstruction from the complete lower-energy CT data set to form an initial estimate of the projection data for the higher energy level; (3) use ordered views to accelerate the computation. Numerical simulations with different numbers of detector elements have also been examined. The results demonstrate that 1 + 14% CT data is sufficient to provide a rather good reconstruction of both the effective atomic number and the electron density distribution of the scanned object, instead of two complete CT data sets.
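Step (1) estimates missing measurements under TV constraints; a toy sketch of that idea is TV-regularized inpainting by projected gradient descent. Everything below (step size, iteration count, edge handling via wrap-around differences) is a simplification, not the paper's algorithm:

```python
import numpy as np

# Toy TV-constrained missing-data estimation: minimize a smoothed total
# variation while clamping the known samples each iteration. Simplified
# edge handling and step size; illustrative only.

def tv_gradient(u, eps=1e-6):
    ux = np.diff(u, axis=0, append=u[-1:, :])      # forward differences
    uy = np.diff(u, axis=1, append=u[:, -1:])
    norm = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / norm, uy / norm
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div                                     # (sub)gradient of TV

def inpaint(data, known_mask, iters=200, step=0.1):
    u = np.where(known_mask, data, data[known_mask].mean())
    for _ in range(iters):
        u = u - step * tv_gradient(u)
        u[known_mask] = data[known_mask]            # enforce data constraint
    return u

truth = np.tile(np.linspace(0, 1, 32), (32, 1))    # smooth toy "sinogram"
mask = np.random.default_rng(0).random(truth.shape) < 0.3   # 30% sampled
est = inpaint(truth, mask)
print(float(np.abs(est - truth).mean()))           # small residual on this toy
```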
This paper addresses the minimum transmission broadcast (MTB) problem for the many-to-all scenario in wireless multihop networks and presents a network-coding broadcast protocol with priority-based deadlock prevention. Our main contributions are as follows. First, we relate the many-to-all-with-network-coding MTB problem to a maximum out-degree problem, whose solution serves as a lower bound on the number of transmissions. Second, we propose a distributed network-coding broadcast protocol, which constructs efficient broadcast trees and directs nodes to transmit packets in a network-coding manner. In addition, we present a priority-based deadlock prevention mechanism to avoid deadlocks. Simulation results confirm that, compared with existing protocols in the literature and the performance bound we present, our proposed network-coding broadcast protocol performs very well in terms of the number of transmissions.
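The transmission savings that network coding buys can be seen in the smallest possible example: a relay XORs two packets so that two neighbors, each already holding a different one, both recover their missing packet from a single coded transmission:

```python
# Minimal network-coding gain example: one XOR-coded transmission replaces
# two plain retransmissions at a relay.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-1", b"packet-2"
coded = xor(p1, p2)            # one transmission instead of two

assert xor(coded, p2) == p1    # neighbor holding p2 recovers p1
assert xor(coded, p1) == p2    # neighbor holding p1 recovers p2
```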
An intrusion detection system (IDS) inspects all inbound and outbound network activity and identifies suspicious patterns that may indicate a network or system attack from someone attempting to break into or compromise a system. In a network-based system (NIDS), the individual packets flowing through a network are analyzed. In a host-based system (HIDS), the IDS examines the activity on each individual computer or host. IDS techniques are divided into two categories: misuse detection and anomaly detection. In recent years, mobile-agent-based technology has been used in distributed systems owing to its characteristics of mobility and autonomy. In this work, we aim to combine IDS with the mobile agent concept to obtain a more scalable, effective, and knowledgeable system.
Voting among replicated data collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data collection process, such as data corruption by malicious devices and security/bandwidth attacks on data paths. For a voting system, how often correct data is delivered to the user in a timely manner and with low overhead characterizes the QoS. Prior works have focused on algorithm correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery energy constraints. We treat the voting system as a 'black box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box with situational inputs such as application priorities, network resources, battery energy, and external threat levels.
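At its core, the voting mechanism delivers the value reported by a majority of the replicated devices: with 2f+1 replicas, up to f corrupt reports are tolerated. A minimal sketch of that core (generic majority voting, not the paper's full managed protocol):

```python
from collections import Counter

# Generic majority voting: with 2f+1 devices, a value backed by at least
# f+1 reports is correct as long as at most f devices are faulty.

def vote(reports: list[bytes], f: int) -> bytes | None:
    value, count = Counter(reports).most_common(1)[0]
    return value if count >= f + 1 else None

honest = [b"42"] * 3
malicious = [b"0", b"99"]
print(vote(honest + malicious, f=2))   # b'42' despite 2 corrupt reports
```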
The deployment and operation of global network architectures can exhibit complex, dynamic behavior, and the comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible at Internet scale, but we are still interested in the phenomena that emerge when the systems are run in their intended environment. We argue for a high-level simulation methodology and introduce a simulation environment based on aggregate models built on state-of-the-art datasets while respecting invariants observed in measurements. The models developed are aimed at studying a clean-slate name-based interdomain routing architecture and provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail in different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences in using the high-level simulation approach and potential pitfalls related to it.
In this work we design and develop Montage for real-time multi-user formation tracking and localization with off-the-shelf smartphones. Montage achieves submeter-level tracking accuracy by integrating temporal and spatial constraints from user movement vector estimation and distance measuring. In Montage we designed a suite of novel techniques to surmount a variety of challenges in real-time tracking, without infrastructure and fingerprints, and without any a priori user-specific (e.g., stride length and phone placement) or site-specific (e.g., digitized map) knowledge. We implemented, deployed, and evaluated Montage in both outdoor and indoor environments. Our experimental results (847 traces from 15 users) show that the stride length estimated by Montage across all users has an error within 9 cm, and the estimated moving direction is within 20°. For real-time tracking, Montage provides meter-second-level formation tracking accuracy with off-the-shelf mobile phones.
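The movement-vector estimation such tracking rests on is step-and-heading dead reckoning: each detected step advances the position by the estimated stride length along the estimated heading. A toy sketch (illustrative; not Montage's actual pipeline):

```python
import math

# Step-and-heading dead reckoning: position advances by stride * heading
# unit vector per detected step. Illustrative, not Montage's pipeline.

def track(start, steps):
    """steps: iterable of (stride_length_m, heading_rad) per detected step."""
    x, y = start
    path = [(x, y)]
    for stride, heading in steps:
        x += stride * math.cos(heading)
        y += stride * math.sin(heading)
        path.append((x, y))
    return path

# Two 0.7 m steps north, then one 0.7 m step east.
print(track((0.0, 0.0), [(0.7, math.pi / 2)] * 2 + [(0.7, 0.0)]))
```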
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing. However, directly applying traditional access control models to the cloud cannot resolve the uncertainty and vulnerability caused by its open conditions. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new access control methods in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are addressed by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can safeguard interactions between users and cloud service nodes.
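The abstract does not give MTBAC's formulas, so the sketch below only illustrates the mutual-trust access decision in spirit: access is granted only when both the user's behavior trust and the node's credibility clear a threshold. The update rule and thresholds are our assumptions:

```python
# Illustrative mutual-trust access decision in the spirit of MTBAC.
# The EWMA update rule and the 0.6 thresholds are assumptions.

def update_trust(old: float, outcome_ok: bool, alpha: float = 0.2) -> float:
    """Exponentially weighted trust update from the latest interaction."""
    return (1 - alpha) * old + alpha * (1.0 if outcome_ok else 0.0)

def allow(user_trust: float, node_credibility: float,
          t_user: float = 0.6, t_node: float = 0.6) -> bool:
    return user_trust >= t_user and node_credibility >= t_node

u, n = 0.5, 0.9
u = update_trust(u, outcome_ok=True)      # user behaved well -> 0.6
print(allow(u, n))                        # True: both sides are trusted
```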
Cloud computing is an emerging paradigm shifting the shape of computing models from being a technology to being a utility. However, security, privacy, and trust are among the issues that can subvert its benefits and hence its wide deployment. With the introduction of omnipresent mobile-based clients, the ubiquity of the model increases, suggesting still higher integration into daily life; nonetheless, the security issues rise to a higher degree as well. The constrained input methods for credentials and the vulnerable wireless communication links are among the factors giving rise to serious security issues. To strengthen access control over cloud resources, organizations now commonly acquire Identity Management Systems (IdM). This paper shows that the most popular IdM, namely OAuth, has many weaknesses in its authorization architecture when operating in the scope of Mobile Cloud Computing. In particular, we find two major issues in the current IdM. First, if the IdM system is compromised through malicious code, it allows a hacker to obtain authorization for all the protected resources hosted on a cloud. Second, all the communication links among client, cloud, and IdM carry the complete authorization token, which can allow a hacker, through traffic interception at any communication link, illegitimate access to protected resources. We also suggest a solution to the reported problems, and justify our arguments with experimentation and mathematical modeling.
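The second issue arises because a bearer token grants access to whoever presents it. A standard mitigation for this class of problem is a sender-constrained (proof-of-possession) token: each request additionally carries a MAC computed with a key that never travels on the wire, so an intercepted token alone is useless. A generic sketch (illustrative; not necessarily the authors' proposed mechanism):

```python
import hashlib
import hmac
import time

# Generic proof-of-possession sketch: the token is bound to a client-held
# key; every request carries a fresh HMAC over token, method, URL, and
# timestamp, so a captured token alone grants nothing.

def sign_request(token: str, pop_key: bytes, method: str, url: str) -> dict:
    ts = str(int(time.time()))
    mac = hmac.new(pop_key, f"{token}|{method}|{url}|{ts}".encode(),
                   hashlib.sha256).hexdigest()
    return {"token": token, "ts": ts, "mac": mac}

def verify(req: dict, pop_key: bytes, method: str, url: str,
           max_age_s: int = 60) -> bool:
    if abs(time.time() - int(req["ts"])) > max_age_s:
        return False                       # stale or replayed request
    expected = hmac.new(
        pop_key, f"{req['token']}|{method}|{url}|{req['ts']}".encode(),
        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["mac"])

key = b"client-held-secret"                # never sent over the wire
req = sign_request("oauth-access-token", key, "GET", "/photos")
print(verify(req, key, "GET", "/photos"))  # True only for the key holder
```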
As the ubiquity of smartphones increases, we see an increase in the popularity of location-based services. Specifically, online social networks provide services such as alerting the user of friend co-location and finding a user's k nearest neighbors. Location information is sensitive, which makes privacy a strong concern for location-based systems like these. We have built one such service that allows two parties to share location information privately and securely. Our system allows every user to maintain and enforce their own policy. When one party (Alice) queries the location of another party (Bob), our system uses homomorphic encryption to test whether Alice is within Bob's policy. If she is, Bob's location is shared with Alice only; if she is not, no user location information is shared with anyone. Given the importance and sensitivity of location information and the easily deployable design of our system, we offer a useful, practical, and important system to users. Our main contributions are a flexible, practical protocol for private proximity testing, a useful and efficient technique for representing location values, and a working implementation of the system designed in this paper, implemented as an Android application using the Facebook online social network for communication between users.
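One standard way to realize such a proximity test with additively homomorphic (Paillier) encryption: both locations are quantized to grid cells, Bob sends E(b), Alice replies with E(r·(b−a)) for random r, and Bob learns 0 iff the cells are equal, i.e., they are nearby. A minimal sketch with toy key sizes (illustrative of the technique; not necessarily the paper's exact protocol):

```python
import math
import random

# Private equality test on grid cells via Paillier encryption. Toy primes;
# use >=1024-bit primes in practice. Illustrative, not the paper's protocol.

p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
g, lam = n + 1, math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    return pow(g, m % n, n2) * pow(r, n, n2) % n2

def dec(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

def equality_response(c_b: int, a: int) -> int:
    """Alice: compute E(r*(b - a)) from Bob's E(b) without learning b."""
    r = random.randrange(1, n)
    c_diff = c_b * enc(-a) % n2            # homomorphic addition: E(b - a)
    return pow(c_diff, r, n2)              # homomorphic scaling: E(r*(b - a))

alice_cell, bob_cell = 4217, 4217          # same grid cell -> "nearby"
resp = equality_response(enc(bob_cell), alice_cell)
print(dec(resp) == 0)                      # True iff the cells are equal
```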
Intrusion Detection Systems (IDS) have become a necessity in computer security because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component of computer security systems and can be classified as Host-based Intrusion Detection (HIDS), which protects a certain host or system, and Network-based Intrusion Detection (NIDS), which protects a network of hosts and systems. This paper addresses probe, or reconnaissance, attacks, which try to collect any possible relevant information about the network. Network probe attacks have two types: host sweep and port scan attacks. Host sweep attacks determine which hosts exist in the network, while port scan attacks determine the available services in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: a packet capture engine, a preprocessor, pattern recognition, classification, and a monitoring and alert module. We have tested the system in a real environment, where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time.
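The temporal embedding that distinguishes a TDNN from a plain feed-forward classifier is simply a sliding window over consecutive packet-feature vectors, so the network sees the attack's behavior over time. A sketch of building such inputs (window length and features are illustrative):

```python
import numpy as np

# TDNN-style temporal embedding: each classifier input is a window of D
# consecutive per-packet feature vectors. D and the features are illustrative.

def time_delay_windows(features: np.ndarray, delay: int) -> np.ndarray:
    """(T, F) per-packet features -> (T-delay+1, delay*F) TDNN inputs."""
    T, F = features.shape
    return np.stack([features[t:t + delay].ravel()
                     for t in range(T - delay + 1)])

pkts = np.random.rand(100, 6)              # toy per-packet feature vectors
X = time_delay_windows(pkts, delay=5)
print(X.shape)                             # (96, 30)
```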
The IT industry loses tens of billions of dollars annually to security attacks such as tampering and malicious reverse engineering. Code obfuscation techniques counter such attacks by transforming code into patterns that resist them. None of the current code obfuscation techniques satisfy all the obfuscation effectiveness criteria, such as resistance to reverse engineering attacks and state space increase. To address this, we introduce new code patterns that we call nontrivial code clones and propose a new obfuscation scheme that combines nontrivial clones with existing obfuscation techniques to satisfy all the effectiveness criteria. The nontrivial code clones need to be constructed manually, thus adding to development cost. This cost can be limited by cloning only the code fragments that need protection and by reusing the clones across projects, which makes the scheme worthwhile given the security risks. In this paper, we present our scheme and illustrate it with a toy example.
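In the spirit of the paper's toy example, here is our own illustration of the clone idea: two fragments that compute the same function but share no structural pattern, which is the property a nontrivial clone needs so that pattern-matching one form reveals nothing about the other:

```python
# Our own illustration (not the paper's example): two structurally dissimilar
# fragments computing the same function; either can substitute for the other.

def checksum_a(data: bytes) -> int:
    total = 0
    for b in data:                         # iterative accumulation
        total = (total + b) % 256
    return total

def checksum_b(data: bytes) -> int:
    if not data:                           # same function, recursive shape
        return 0
    return (data[-1] + checksum_b(data[:-1])) % 256

assert checksum_a(b"hello") == checksum_b(b"hello")
```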
To strengthen network security and improve the network's active-defense intrusion detection capabilities, this paper presents an active-defense intrusion detection system based on a mixed-interaction honeypot. The system helps reduce false alarms and enhances the stability and security of the network. Testing and simulation experiments show that the system improves the network's active defense, increases the honeypot's decoy capability, and strengthens attack prediction, giving it good application and promotion value.
For wireless sensor networks deployed to monitor and report real events, event source-location privacy (SLP) is a critical security property. Previous work has proposed schemes based on fake packet injection, such as FitProbRate and TFS, to realize event source anonymity for sensor networks under a challenging attack model in which a global attacker is able to monitor the traffic in the entire network. Although these schemes protect SLP well, they suffer from imbalance in traffic or delay. In this paper, we propose an Optimal-cluster-based Source Anonymity Protocol (OSAP), which achieves a tradeoff between network traffic and real event report latency by adjusting the transmission rate and the radius of unequal clusters, thereby reducing network traffic. The simulation results demonstrate that OSAP can significantly reduce network traffic while the delay meets the system requirement.
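The fake-injection schemes OSAP builds on (e.g., FitProbRate) hide real events inside dummy traffic sent at exponentially distributed intervals; a real report simply takes the next scheduled slot, so an eavesdropper sees the same traffic pattern either way. A minimal sketch of that baseline idea (mean interval is illustrative; this is not OSAP's clustering logic):

```python
import random

# Baseline fake-packet scheduling sketch: exponentially distributed send
# times; a real event rides the next scheduled slot. Parameters illustrative.

MEAN_INTERVAL_S = 2.0

def schedule(horizon_s: float, real_event_times: list[float]):
    sends, t, pending = [], 0.0, sorted(real_event_times)
    while t < horizon_s:
        t += random.expovariate(1.0 / MEAN_INTERVAL_S)
        if pending and pending[0] <= t:
            sends.append((t, "real"))      # real report replaces a dummy
            pending.pop(0)
        else:
            sends.append((t, "dummy"))
    return sends

for when, kind in schedule(10.0, real_event_times=[3.2])[:6]:
    print(f"{when:5.2f}s {kind}")
```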
In adaptive processing applications, the design of the adaptive filter requires estimation of the unknown interference-plus-noise covariance matrix from secondary training data. The presence of outliers in the training data can severely degrade the performance of adaptive processing. By exploiting the sparse prior of the outliers, a Bayesian framework to develop a computationally efficient outlier-resistant adaptive filter based on sparse Bayesian learning (SBL) is proposed. The expectation-maximisation (EM) algorithm is used therein to obtain a maximum a posteriori (MAP) estimate of the interference-plus-noise covariance matrix. Numerical simulations demonstrate the superiority of the proposed method over existing methods.
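The abstract does not reproduce the SBL update equations, so the sketch below shows a closely related classical alternative: EM for a heavy-tailed Student-t model, whose E-step weights automatically suppress outlying training snapshots when estimating the covariance (an illustrative substitute, not the paper's SBL algorithm):

```python
import numpy as np

# Outlier-robust covariance via EM for a multivariate Student-t model:
# snapshots with large Mahalanobis distance get small weights. Illustrative
# classical alternative to the paper's SBL method; nu is an assumption.

def robust_cov_em(X, nu=3.0, iters=50):
    N, d = X.shape
    mu, S = X.mean(0), np.cov(X.T) + 1e-6 * np.eye(d)
    for _ in range(iters):
        diff = X - mu
        maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)
        w = (nu + d) / (nu + maha)               # E-step: outliers downweighted
        mu = w @ X / w.sum()                     # M-step: weighted mean
        diff = X - mu
        S = (w[:, None] * diff).T @ diff / N     # M-step: weighted covariance
    return mu, S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:5] *= 20                                      # inject a few strong outliers
mu, S = robust_cov_em(X)
print(np.round(S.diagonal(), 2))                 # near 1 despite the outliers
```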
Storage area networking is driving commodity data center switches to support lossless Ethernet (DCB). Unfortunately, to enable DCB for all traffic on arbitrary network topologies, we must address several problems that can arise in lossless networks, e.g., large buffering delays, unfairness, head of line blocking, and deadlock. We propose TCP-Bolt, a TCP variant that not only addresses the first three problems but reduces flow completion times by as much as 70%. We also introduce a simple, practical deadlock-free routing scheme that eliminates deadlock while achieving aggregate network throughput within 15% of ECMP routing. This small compromise in potential routing capacity is well worth the gains in flow completion time. We note that our results on deadlock-free routing are also of independent interest to the storage area networking community. Further, as our hardware testbed illustrates, these gains are achievable today, without hardware changes to switches or NICs.
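Deadlock freedom in lossless fabrics is commonly certified by showing that the channel-dependency graph is acyclic: a route that traverses link A and then link B makes B depend on A, and a cycle of such dependencies can deadlock. A generic sketch of that check (standard technique; not necessarily the paper's exact routing scheme):

```python
# Channel-dependency-graph cycle check: build dependencies between
# consecutive links of each route, then DFS for a cycle.

def has_deadlock(routes: list[list[str]]) -> bool:
    deps: dict[tuple, set] = {}
    for path in routes:                        # path = ordered list of nodes
        links = list(zip(path, path[1:]))      # consecutive links used
        for a, b in zip(links, links[1:]):
            deps.setdefault(a, set()).add(b)   # holding a, waiting for b

    WHITE, GRAY, BLACK = 0, 1, 2
    color: dict[tuple, int] = {}

    def cyclic(u) -> bool:
        color[u] = GRAY
        for v in deps.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and cyclic(v)):
                return True
        color[u] = BLACK
        return False

    return any(color.get(u, WHITE) == WHITE and cyclic(u) for u in deps)

# Three routes whose link dependencies form a cycle -> potential deadlock.
print(has_deadlock([["s1", "s2", "s3"],
                    ["s2", "s3", "s1"],
                    ["s3", "s1", "s2"]]))      # True
```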
Location privacy preservation has become an important issue in providing location-based services (LBSs). When mobile users report their locations to the LBS server or third-party servers, they risk leaking their location information if such servers are compromised. To address this issue, we propose a Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing and built on a Markov chain mobility model. The LPPS deploys distributed cache proxies in the most frequently visited areas to store the most popular location-related data and pushes it to mobile users passing by. Because mobile users receive the popular location-related data from the cache proxies without reporting their real locations, their location privacy is well preserved, which is shown to achieve k-anonymity. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with low communication overhead.
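The "most frequently visited areas" can be identified from the mobility model: estimate the region-to-region transition matrix from traces, take its stationary distribution, and deploy proxies at the highest-probability regions. A sketch of that step (region granularity and the smoothing prior are our assumptions):

```python
import numpy as np

# Choose cache-proxy sites from a Markov mobility model: transition matrix
# estimated from traces, stationary distribution gives visit frequencies.
# The small smoothing prior and region count are assumptions.

def stationary(P: np.ndarray) -> np.ndarray:
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

def cache_sites(traces, n_regions: int, k: int) -> np.ndarray:
    P = np.full((n_regions, n_regions), 1e-9)   # prior avoids zero rows
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)
    pi = stationary(P)
    return np.argsort(pi)[::-1][:k]             # top-k most-visited regions

traces = [[0, 1, 2, 1, 1, 3], [1, 2, 1, 1, 0, 1]]
print(cache_sites(traces, n_regions=4, k=2))    # e.g. regions [1 2]
```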
In wireless networks, the spoofing attack is one of the most common and challenging attacks, and it degrades overall network performance. In this paper, a medoid-based clustering approach is proposed to detect multiple spoofing attacks in wireless networks. In addition, an Enhanced Partitioning Around Medoids (EPAM) algorithm with average silhouette has been integrated with the clustering mechanism to detect multiple spoofing attacks with a higher accuracy rate. In the proposed method, a received-signal-strength-based clustering approach is adopted for medoid clustering to detect attacks. To prevent multiple spoofing attacks, a dynamic MAC address allocation scheme using the MD5 hashing technique is implemented. The experimental results show that the proposed method can detect spoofing attacks with a high accuracy rate and prevent them, improving overall network performance.
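The detection core can be sketched as follows: cluster per-frame RSS vectors with a plain PAM-style medoid algorithm and use the average silhouette to judge whether frames claiming one MAC address really come from more than one transmitter. Everything below (plain PAM rather than the paper's EPAM, the 0.5 threshold, the toy data) is illustrative:

```python
import numpy as np

# RSS-based spoofing detection sketch: k-medoids (plain PAM alternation, not
# the paper's EPAM) plus average silhouette. Threshold and data illustrative.

def kmedoids(D: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)     # assign to nearest medoid
        medoids = np.array([
            np.flatnonzero(labels == c)[
                np.argmin(D[np.ix_(labels == c, labels == c)].sum(0))]
            for c in range(k)])                        # most central member
    return labels

def avg_silhouette(D: np.ndarray, labels: np.ndarray) -> float:
    n, s = len(D), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()                           # within-cluster distance
        b = min(D[i, labels == c].mean()
                for c in set(labels) - {labels[i]})    # nearest other cluster
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

rng = np.random.default_rng(1)
rss = np.vstack([rng.normal(-50, 1, (30, 3)),      # genuine node's RSS
                 rng.normal(-70, 1, (30, 3))])     # spoofer using the same MAC
D = np.linalg.norm(rss[:, None] - rss[None, :], axis=2)
labels = kmedoids(D, k=2)
print(avg_silhouette(D, labels) > 0.5)             # True -> spoofing suspected
```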
Routers in the Content-Centric Networking (CCN) architecture maintain state for all pending content requests, so as to be able to later return the corresponding content. By employing stateful forwarding, CCN supports native multicast, enhances security and enables adaptive forwarding, at the cost of excessive forwarding state that raises scalability concerns. We propose a semi-stateless forwarding scheme in which, instead of tracking each request at every on-path router, requests are tracked at every d hops. At intermediate hops, requests gather reverse path information, which is later used to deliver responses between routers using Bloom filter-based stateless forwarding. Our approach effectively reduces forwarding state, while preserving the advantages of CCN forwarding. Evaluation results over realistic ISP topologies show that our approach reduces forwarding state by 54%-70% in unicast delivery, without any bandwidth penalties, while in multicast delivery it reduces forwarding state by 34%-55% at the expense of 6%-13% in bandwidth overhead.
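A minimal sketch of the stateless leg of this scheme: reverse-path link identifiers gathered at intermediate hops are folded into a fixed-size Bloom filter carried by the request, and a response is then forwarded over every link whose identifier tests positive (filter size and hash count are illustrative; false positives only produce a few duplicate copies, never a lost response):

```python
import hashlib

# Bloom-filter reverse-path forwarding sketch: add link IDs on the way up,
# test membership on the way back. M and K are illustrative parameters.

M, K = 256, 3                              # filter bits, hash functions

def _bits(link_id: str) -> list[int]:
    h = hashlib.sha256(link_id.encode()).digest()
    return [int.from_bytes(h[2 * i:2 * i + 2], "big") % M for i in range(K)]

def add(filter_: int, link_id: str) -> int:
    for b in _bits(link_id):
        filter_ |= 1 << b
    return filter_

def member(filter_: int, link_id: str) -> bool:
    return all(filter_ >> b & 1 for b in _bits(link_id))

f = 0
for link in ["r1->r2", "r2->r5", "r5->r9"]:      # reverse path gathered en route
    f = add(f, link)
print(member(f, "r2->r5"), member(f, "r3->r4"))  # True False (w.h.p.)
```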