Bibliography
To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbor search and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows used for outlier IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented that adapts the window size to the dynamic environment of each point, together with practical criteria for determining the adaptive window size. Experiments on real range images of multi-line surfaces are conducted, with evaluations in terms of computational complexity and quality assessment, and with comparative analysis against several other popular methods. The results indicate that the proposed method detects impulsive noise with high accuracy and, with the help of the variable window, denoises it with strong adaptability.
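The abstract does not define the Index Distance Weighted Mean filter; a minimal sketch of one plausible reading, weighting each nearby non-IN sample inversely by its index distance from the noisy point (all names illustrative):

```python
import numpy as np

def idw_mean_filter(values, noise_mask, center, window=5):
    """Replace the sample at `center` with an index-distance-weighted
    mean of the non-noise neighbors inside `window`.

    values     : 1-D array of range measurements
    noise_mask : boolean array, True where a sample was flagged as IN
    """
    half = window // 2
    lo, hi = max(0, center - half), min(len(values), center + half + 1)
    num, den = 0.0, 0.0
    for i in range(lo, hi):
        if i == center or noise_mask[i]:
            continue  # skip the noisy sample itself and other INs
        w = 1.0 / abs(i - center)  # weight falls off with index distance
        num += w * values[i]
        den += w
    return num / den if den > 0 else values[center]
```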
Mobile ad-hoc networks (MANETs) are a new field in networking because they operate as autonomous networks, and their applications have been increasing rapidly in recent years. It is therefore increasingly important to provide suitable routing protocols and security against attackers. MANETs today face many problems, such as small bandwidth, limited energy, security, limited computational resources, and high mobility. Infrastructure wireless networks have larger bandwidth, larger memory, and power backup, and different routing protocols are easily applied to them; in MANETs, however, some of these applications fail due to mobility and small power backup, so routing protocols that consume little energy during packet transfer are required. Many challenging problems thus remain open in MANETs, with feasible research directions in routing protocols, security issues, energy management, and more. Our research will most likely be dedicated to authentication in mobile ad-hoc networks.
The distinctive features of mobile ad hoc networks (MANETs), including dynamic topology and open wireless medium, may lead to MANETs suffering from many security vulnerabilities. In this paper, using recent advances in uncertain reasoning that originated from the artificial intelligence community, we propose a unified trust management scheme that enhances the security in MANETs. In the proposed trust management scheme, the trust model has two components: trust from direct observation and trust from indirect observation. With direct observation from an observer node, the trust value is derived using Bayesian inference, which is a type of uncertain reasoning when the full probability model can be defined. On the other hand, with indirect observation, which is also called secondhand information that is obtained from neighbor nodes of the observer node, the trust value is derived using the Dempster-Shafer theory (DST), which is another type of uncertain reasoning when the proposition of interest can be derived by an indirect method. By combining these two components in the trust model, we can obtain more accurate trust values of the observed nodes in MANETs. We then evaluate our scheme under the scenario of MANET routing. Extensive simulation results show the effectiveness of the proposed scheme. Specifically, throughput and packet delivery ratio (PDR) can be improved significantly with slightly increased average end-to-end delay and overhead of messages.
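The abstract names the two uncertain-reasoning tools without giving formulas. As a minimal sketch, assuming a Beta(1,1) prior for direct observations and a two-hypothesis frame {trust, distrust} for the secondhand evidence (all names and numbers illustrative):

```python
def direct_trust(successes, failures):
    """Bayesian direct trust: posterior mean of a Beta(1,1) prior
    updated with observed good/bad forwarding events."""
    return (successes + 1) / (successes + failures + 2)

def ds_combine(m1, m2):
    """Dempster's rule over the frame {T (trust), D (distrust)}.
    Each mass function is a dict with keys 'T', 'D', 'U' (T or D)."""
    k = m1['T'] * m2['D'] + m1['D'] * m2['T']  # conflicting mass
    if k >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    norm = 1.0 - k
    return {
        'T': (m1['T']*m2['T'] + m1['T']*m2['U'] + m1['U']*m2['T']) / norm,
        'D': (m1['D']*m2['D'] + m1['D']*m2['U'] + m1['U']*m2['D']) / norm,
        'U': (m1['U']*m2['U']) / norm,
    }

# Direct observation: 8 packets forwarded, 2 dropped.
print(direct_trust(8, 2))  # 0.75

# Two neighbors report secondhand evidence about the same node.
m_a = {'T': 0.6, 'D': 0.1, 'U': 0.3}
m_b = {'T': 0.5, 'D': 0.2, 'U': 0.3}
print(ds_combine(m_a, m_b))
```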
Game theory can provide a useful tool to study the security problem in mobile ad hoc networks (MANETs). Most existing work on applying game theory to security considers only two players in the security game model: an attacker and a defender. While this assumption may be valid for a network with centralized administration, it is not realistic in MANETs, where centralized administration is not available. In this paper, using recent advances in mean field game theory, we propose a novel game-theoretic approach with multiple players for security in MANETs. Mean field game theory provides a powerful mathematical tool for problems with a large number of players. The proposed scheme can enable an individual node in a MANET to make strategic security defence decisions without centralized administration. In addition, since security defence mechanisms consume precious system resources (e.g., energy), the proposed scheme considers not only the security requirements of MANETs but also the system resources. Moreover, each node in the proposed scheme only needs to know its own state information and the aggregate effect of the other nodes in the MANET. Therefore, the proposed scheme is fully distributed. Simulation results are presented to illustrate the effectiveness of the proposed scheme.
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from a vast amount of user log data. We propose a new interactive visualization technique (the flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
A distributed cyber control system comprises various types of assets, including sensors, intrusion detection systems, scanners, controllers, and actuators. The modeling and analysis of these components usually require multi-disciplinary approaches. This paper presents modeling and dynamic analysis of a distributed cyber control system for situational awareness by taking advantage of control theory and time Petri nets. Linear time-invariant systems are used to model the target system, attacks, asset influences, and an anomaly-based intrusion detection system. Time Petri nets are used to model the impact and timing relationships of attacks, vulnerability, and recovery at every node. To characterize those distributed control systems that are perfectly attackable, algebraic and topological attackability conditions are derived. Numerical evaluation is performed to determine the impact of attacks on the distributed control system.
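The algebraic attackability conditions are not spelled out in the abstract. As an illustration under the stated LTI modeling, a Kalman reachability-style rank test for whether an attack input entering through a hypothetical matrix B_attack can steer the entire state (a plausible reading of "perfectly attackable", not necessarily the paper's exact condition):

```python
import numpy as np

def attackable(A, B_attack, tol=1e-9):
    """Kalman-style rank test: the attack input can steer the whole
    state space iff the reachability matrix [B, AB, ..., A^(n-1)B]
    has full row rank."""
    n = A.shape[0]
    blocks = [B_attack]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    R = np.hstack(blocks)
    return np.linalg.matrix_rank(R, tol) == n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])   # attack enters through the actuator
print(attackable(A, B))        # True: every state is reachable
```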
The University of Illinois at Urbana-Champaign (Illinois), Pacific Northwest National Laboratory (PNNL), and University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research that improves the security and resiliency of cyber-physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational-awareness experiment for the power grid across the three sites.
In large-scale systems, user authentication usually needs assistance from a remote central authentication server via networks. The authentication service, however, could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. For the slow-connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. The other authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it to multi-factor authentication protocols in an efficient and generic way.
The time delay of the echo generated by a moving target simulator based on digital delay techniques is discrete. As a result, there are range and phase errors between the simulated target and a real target, and the simulated target moves discontinuously due to the discrete time delay. To solve this problem and generate a continuously moving target, this paper uses signal processing techniques to adjust the range and phase errors between the two targets. By adjusting the range gate, the time delay error is reduced to less than the sampling interval. According to the relationship between range and phase, the residual error within one range bin can then be removed equivalently by phase compensation. Simulation results show that adjusting the range gate greatly reduces the time delay errors, and phase compensation removes the residual errors. In other words, a truly continuously moving target is generated and the problem is solved.
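A minimal sketch of the two-step correction described above, assuming an illustrative carrier frequency FC and sampling rate FS: the range gate supplies the integer-sample delay, and the sub-sample residual is removed by an equivalent carrier-phase rotation.

```python
import numpy as np

C = 3e8          # propagation speed (m/s)
FS = 100e6       # simulator sampling rate (Hz); illustrative value
FC = 10e9        # radar carrier frequency (Hz); illustrative value

def delay_and_phase(target_range_m):
    """Split the two-way echo delay into an integer number of samples
    (set via the range gate) plus a sub-sample residual that is
    removed by an equivalent phase rotation."""
    tau = 2.0 * target_range_m / C          # true two-way delay
    n = round(tau * FS)                     # coarse delay in samples
    residual = tau - n / FS                 # < one sampling interval
    phase = np.exp(-1j * 2 * np.pi * FC * residual)  # compensation term
    return n, phase

n, phi = delay_and_phase(15_037.3)
print(n, np.angle(phi))
```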
This paper proposes an algorithm for multi-channel SAR ground moving target detection and estimation using the Fractional Fourier Transform (FrFT). To detect moving targets with low speed, the clutter is first suppressed by the Displaced Phase Center Antenna (DPCA) technique, which enhances the signal-to-clutter ratio. After clutter suppression, the echo of the moving target remains and can be regarded as a chirp signal whose parameters can be estimated by the FrFT. The FrFT, one of the most widely used tools for time-frequency analysis, is utilized to estimate the Doppler parameters, from which the motion parameters, including the velocity and the acceleration, can be obtained. The effectiveness of the proposed method is validated by simulation.
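As a self-contained stand-in for the FrFT peak search (a discrete FrFT implementation is out of scope here), the same chirp-rate estimate can be sketched as a dechirp-and-FFT search: the candidate rate that best concentrates the spectrum plays the role of the FrFT's optimal rotation order.

```python
import numpy as np

def estimate_chirp_rate(signal, fs, rates):
    """Estimate the chirp rate by dechirp-and-FFT search: the correct
    rate collapses the chirp into a single spectral line (the same
    peak the FrFT locates at its optimal rotation order)."""
    n = len(signal)
    t = np.arange(n) / fs
    best_rate, best_peak = None, -np.inf
    for k in rates:
        spec = np.abs(np.fft.fft(signal * np.exp(-1j * np.pi * k * t**2)))
        if spec.max() > best_peak:
            best_rate, best_peak = k, spec.max()
    return best_rate

fs = 1e3
t = np.arange(1024) / fs
s = np.exp(1j * 2 * np.pi * (50 * t + 0.5 * 30 * t**2))  # f0=50 Hz, k=30 Hz/s
print(estimate_chirp_rate(s, fs, np.linspace(0, 60, 121)))  # ~30
```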
Since the massive deployment of Cyber-Physical Systems (CPSs) calls for long-range and reliable communication services with manageable cost, it has been believed to be an inevitable trend to relay a significant portion of CPS traffic through existing networking infrastructures such as the Internet. Adversaries who have access to networking infrastructures can therefore eavesdrop on network traffic and then perform traffic analysis attacks in order to identify CPS sessions and subsequently launch various attacks. As we can hardly prevent all adversaries from accessing network infrastructures, thwarting traffic analysis attacks becomes indispensable. Traffic morphing serves as an effective means towards this direction. In this paper, a novel traffic morphing algorithm, CPSMorph, is proposed to protect CPS sessions. CPSMorph maintains a number of network sessions whose distributions of inter-packet delays are statistically indistinguishable from those of typical network sessions. A CPS message will be sent through one of these sessions with assured satisfaction of its time constraint. CPSMorph strives to minimize the overhead by dynamically adjusting the morphing process. It is characterized by low complexity as well as high adaptivity to the changing dynamics of CPS sessions. Experimental results have shown that CPSMorph can effectively perform traffic morphing for real-time CPS messages with moderate overhead.
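CPSMorph's actual algorithm is not given in the abstract; a toy sketch of the core idea, drawing inter-packet delays from the distribution of a typical session while never exceeding the CPS message's deadline (the lognormal target is purely illustrative):

```python
import random

def morph_delays(deadline_s, target_sampler, max_packets=64):
    """Draw inter-packet delays from the distribution of a typical
    session until the CPS message deadline is consumed, so the morphed
    session looks statistically typical while the time constraint holds."""
    delays, total = [], 0.0
    while len(delays) < max_packets:
        d = target_sampler()
        if total + d > deadline_s:
            break  # next gap would violate the deadline; stop here
        delays.append(d)
        total += d
    return delays

# Hypothetical target: web-like lognormal inter-packet delays.
sampler = lambda: random.lognormvariate(-4.0, 1.0)
print(morph_delays(deadline_s=0.5, target_sampler=sampler))
```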
Port hopping is a typical moving target defense, which constantly changes the service port number to thwart reconnaissance attacks. It is effective in hiding service identities and confusing potential attackers, but it is still unknown how effective port hopping is and under what circumstances it is a viable proactive defense, because existing works are limited, usually discussing only a few parameters and giving empirical studies. This paper introduces an urn model and quantifies the likelihood of attacker success in terms of the port pool size, number of probes, number of vulnerable services, and hopping frequency. Theoretical analysis shows that port hopping is an effective and promising proactive defense technology for thwarting network attacks.
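The abstract's urn model suggests two standard closed forms, sketched below: probing a static port assignment is drawing without replacement, while probing a hopping assignment re-randomizes between probes (assuming, illustratively, a hop after every probe).

```python
from math import comb

def hit_prob_static(pool, vulnerable, probes):
    """Static ports: probing without replacement (classic urn draw)."""
    return 1 - comb(pool - vulnerable, probes) / comb(pool, probes)

def hit_prob_hopping(pool, vulnerable, probes):
    """Ports re-hop between probes: each probe is an independent draw."""
    return 1 - (1 - vulnerable / pool) ** probes

# 65536-port pool, 3 vulnerable services, 1000 probes.
print(hit_prob_static(65536, 3, 1000))   # ~0.045
print(hit_prob_hopping(65536, 3, 1000))  # ~0.045
```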
In recent years, HTML5 has been widely adopted in popular browsers. Unfortunately, as a new Web standard, HTML5 may expand the Cross-Site Scripting (XSS) attack surface even as it improves the interactivity of the page. In this paper, we identify 14 XSS attack vectors related to HTML5 through a systematic analysis of its new tags and attributes. Based on these vectors, an XSS test vector repository is constructed and a dynamic XSS vulnerability detection tool focusing on Webmail systems is implemented. By applying the tool to some popular Webmail systems, seven exploitable XSS vulnerabilities were found. The evaluation results show that our tool can efficiently detect XSS vulnerabilities introduced by HTML5.
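The paper's 14 vectors are not listed in the abstract; a few widely known HTML5-introduced vectors of the kind such a repository would contain, wrapped in a toy reflected-XSS check:

```python
# Widely known HTML5-era vectors (illustrative, not the paper's list):
HTML5_XSS_VECTORS = [
    '<video><source onerror="alert(1)">',          # media error handler
    '<input autofocus onfocus=alert(1)>',          # fires without a click
    '<form id=f></form>'
    '<button form=f formaction="javascript:alert(1)">x</button>',
]

def scan(page_source):
    """Toy check: did a reflected test vector survive into the page?"""
    return [v for v in HTML5_XSS_VECTORS if v in page_source]
```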
With the rapid development of information technology, information security management is ever more important. The OpenSSL security incident showed that there are distinct disadvantages in the security management of the current hierarchical structure: software and hardware facilities must enforce security management on their implementations of crucial basic protocols in order to ease, to a certain extent, the security threats against those facilities. This article expounds cross-layer security management and enumerates five contributory factors for the core problems that such management faces.
This paper proposes MIMO cross-layer precoding secure communications via patterns controlled by higher-layer cryptography. In contrast to physical-layer security systems, the proposed scheme can enhance security in adverse situations that physical-layer security can hardly deal with. Two typical situations are considered: one in which the attackers have ideal CSI, and another in which the eavesdropper's channel is highly correlated with the legitimate channel. Our scheme integrates upper-layer and physical-layer security together to guarantee security in a real communication system. Extensive theoretical analysis and simulations are conducted to demonstrate its effectiveness. The proposed method can feasibly be extended to many other communication scenarios.
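The abstract does not specify how the higher-layer key controls the precoding pattern; one plausible sketch is a keyed selection from a shared codebook, so the spatial pattern is unpredictable without the upper-layer secret (codebook and key are illustrative, not the paper's construction):

```python
import hashlib, hmac
import numpy as np

def precoder_for_frame(key: bytes, frame_no: int, codebook):
    """Pick the precoding pattern for a frame from a shared codebook,
    keyed by an upper-layer secret: without the key, an eavesdropper
    cannot predict which spatial pattern carries the data."""
    tag = hmac.new(key, frame_no.to_bytes(8, 'big'), hashlib.sha256).digest()
    return codebook[int.from_bytes(tag[:4], 'big') % len(codebook)]

# Toy 2x2 codebook: identity and a 90-degree rotation.
codebook = [np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]])]
W = precoder_for_frame(b'shared-secret', frame_no=7, codebook=codebook)
```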
Virtualized environments are widely thought to cause problems for software-based random number generators (RNGs), due to the use of virtual machine (VM) snapshots as well as fewer and believed-to-be lower quality entropy sources. Despite this, we are unaware of any published analysis of the security of critical RNGs when running in VMs. We fill this gap, using measurements of Linux's RNG systems (without the aid of hardware RNGs, the most common use case today) on Xen, VMware, and Amazon EC2. Despite CPU cycle counters providing a significant source of entropy, various deficiencies in the design of the Linux RNG make its first output vulnerable during VM boots and, more critically, make it suffer from catastrophic reset vulnerabilities. We show cases in which the RNG will output the exact same sequence of bits each time it is resumed from the same snapshot. This can compromise, for example, cryptographic secrets generated after resumption. We explore legacy-compatible countermeasures, as well as a clean-slate solution. The latter is a new RNG called Whirlwind that provides a simpler, more secure solution for providing system randomness.
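A toy model (deliberately not Linux's actual /dev/random machinery) of why snapshot resumption is catastrophic for an RNG whose entire state lives in the snapshot:

```python
import random

class SnapshotRNG:
    """Toy model of the reset vulnerability: a software RNG whose whole
    state is captured in a VM snapshot repeats its output stream every
    time the snapshot is resumed, unless fresh entropy is mixed in."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
    def snapshot(self):
        return self._rng.getstate()
    def resume(self, state):
        self._rng.setstate(state)
    def keybytes(self, n=8):
        return bytes(self._rng.getrandbits(8) for _ in range(n))

rng = SnapshotRNG(seed=1234)
state = rng.snapshot()
first = rng.keybytes()
rng.resume(state)               # roll back to the snapshot
print(first == rng.keybytes())  # True: identical "secrets" both times
```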
Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the MobilityFirst project, we have been exploring clean-slate enhancements to the network protocols that can inherently provide support for at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services that enhance the mobile end-user experience by either off-loading work and/or traffic from mobile devices, or by enabling en-route service adaptation through context-awareness (e.g., knowing the current access bandwidth). In this work we present details of the architectural support for in-network services within MobilityFirst, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols along with compute-layer extensions on the GENI testbed, where we used a set of programmable nodes across 7 distributed sites to configure a MobilityFirst network with hosts, routers, and in-network compute services.
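As a sketch of the demonstrative rate-adaptation service, assuming an illustrative bitrate ladder and a measured access bandwidth reported by the context-awareness machinery (not the paper's actual implementation):

```python
BITRATES_KBPS = [250, 500, 1000, 2000, 4000]  # illustrative video ladder

def pick_bitrate(measured_kbps, headroom=0.8):
    """In-network rate adaptation: choose the highest ladder step that
    fits under the measured access bandwidth with some headroom."""
    budget = measured_kbps * headroom
    eligible = [r for r in BITRATES_KBPS if r <= budget]
    return eligible[-1] if eligible else BITRATES_KBPS[0]

print(pick_bitrate(1800))  # 1800 * 0.8 = 1440 kbps budget -> 1000
```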
It is expected that clean-slate network designs will be implemented for wide-area network applications. Multi-tenancy in OpenFlow networks is an effective method for supporting a clean-slate network design, because cost-effectiveness is improved by sharing substrate networks. To guarantee the programmability of OpenFlow for tenants, complete flow-space (i.e., header values of the data packets) virtualization is necessary. Wide-area substrate networks typically have multiple administrators; we therefore need to implement flow-space virtualization over multiple administration networks. In existing techniques, a third party is solely responsible for managing the mapping of header values for flow-space virtualization on behalf of substrate network administrators and tenants, despite the severity of a third-party failure. In this paper, we propose AutoVFlow, a mechanism that allows flow-space virtualization in wide-area networks without the need for a third party. Substrate network administrators implement flow-space virtualization autonomously: each is responsible for virtualizing the flow space involving the switches in its own substrate network. Using a prototype of AutoVFlow, we measured the virtualization overhead, and the results show a negligible amount of overhead.
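AutoVFlow's concrete mapping scheme is not described in the abstract; a toy per-domain mapper illustrating the general idea of autonomous flow-space virtualization, with each administrator rewriting tenant header values at its own edge (field choice and encoding are hypothetical):

```python
class EdgeVirtualizer:
    """Toy per-administrator flow-space mapping: tenant header values
    are rewritten to substrate-unique values at the domain edge and
    restored on egress, with no third-party mapper involved."""
    def __init__(self, domain_tag):
        self.domain_tag = domain_tag
        self.to_substrate, self.to_tenant = {}, {}

    def ingress(self, tenant_vlan):
        if tenant_vlan not in self.to_substrate:
            sub = (self.domain_tag << 8) | len(self.to_substrate)
            self.to_substrate[tenant_vlan] = sub
            self.to_tenant[sub] = tenant_vlan
        return self.to_substrate[tenant_vlan]

    def egress(self, substrate_vlan):
        return self.to_tenant[substrate_vlan]

edge = EdgeVirtualizer(domain_tag=3)
print(edge.egress(edge.ingress(100)))  # 100: mapping is reversible
```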
This paper presents a survey of cyber security issues in current industrial automation and control systems, including observations and insights collected and distilled through a series of discussions with some of the major Japanese experts in this field. It also tries to provide a conceptual framework for those issues and a big picture of some ongoing projects that aim to enhance industrial control system security.
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss his privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game-theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.
With the advent of social networks and cloud computing, the amount of multimedia data produced and communicated within social networks is rapidly increasing. Meanwhile, social networking platforms based on cloud computing have made multimedia big-data sharing in social networks easier and more efficient. The growth of social multimedia, as demonstrated by social networking sites such as Facebook and YouTube, combined with advances in multimedia content analysis, underscores potential risks for malicious use such as illegal copying, piracy, plagiarism, and misappropriation. Therefore, secure multimedia sharing and traitor tracing have become critical and urgent issues in social networks. In this paper, we propose a scheme for implementing the Tree-Structured Haar (TSH) transform in a homomorphically encrypted domain for fingerprinting, using social network analysis, with the purpose of protecting media distribution in social networks. The motivation is to map the hierarchical community structure of a social network onto the tree structure of the TSH transform for JPEG2000 coding, encryption, and fingerprinting. First, the fingerprint code is produced using social network analysis. Second, the encrypted content is decomposed by the TSH transform. Third, the content is fingerprinted in the TSH transform domain. Finally, the encrypted and fingerprinted contents are delivered to users via hybrid multicast-unicast. The use of fingerprinting along with encryption provides a double layer of protection for media sharing in social networks. Theoretical analysis and experimental results show the effectiveness of the proposed scheme.
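The TSH transform itself is not defined in the abstract; as a sketch of the underlying building block, a plain (orthonormal) Haar decomposition, noting that the tree-structured variant would choose its splits to mirror the community hierarchy rather than always recursing on the approximation branch:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: pairwise
    averages (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass / summary branch
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass / detail branch
    return a, d

def tree_haar(x, levels):
    """Recursive Haar decomposition (always on the approximation
    branch here; a tree-structured variant picks the split per node)."""
    details = []
    for _ in range(levels):
        x, d = haar_step(x)
        details.append(d)
    return x, details

approx, details = tree_haar([4, 6, 10, 12, 8, 8, 2, 0], levels=3)
print(approx, [list(d) for d in details])
```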
This paper presents a unified approach for the detection of network anomalies. Current state-of-the-art methods are often able to detect one class of anomalies only at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is equivalent to a Hidden Markov Model (HMM) for continuous-valued data and can be computed using incremental methods to manage the high throughput (volume) and velocity that characterize Big Data. Detailed experiments on synthetic and real network traces show a significant improvement in detection capability over competing approaches. In the process, we also address the issue of robustness of network anomaly detection systems in a principled fashion.
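A minimal sketch of LDS-based detection, assuming known model matrices and flagging observations whose normalized innovation under a Kalman filter is improbably large (the threshold and toy numbers are illustrative, not the paper's incremental algorithm):

```python
import numpy as np

def kalman_anomaly_scores(y, A, C, Q, R, x0, P0, thresh=9.0):
    """Score each observation by its squared normalized innovation
    under the LDS x_{t+1} = A x_t + w, y_t = C x_t + v; large scores
    flag traffic that the learned dynamics cannot explain."""
    x, P = x0, P0
    flags = []
    for yt in y:
        x, P = A @ x, A @ P @ A.T + Q            # predict
        e = yt - C @ x                            # innovation
        S = C @ P @ C.T + R                       # innovation variance
        flags.append(float(e.T @ np.linalg.inv(S) @ e) > thresh)
        K = P @ C.T @ np.linalg.inv(S)            # update
        x = x + K @ e
        P = (np.eye(len(x)) - K @ C) @ P
    return flags

A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
y = [np.array([v]) for v in [1.0, 1.1, 0.9, 5.0, 1.0]]
# Flags the jump to 5.0 (the next sample may also flag while the
# filter re-converges from the contaminated state).
print(kalman_anomaly_scores(y, A, C, Q, R, np.array([1.0]), np.array([[1.0]])))
```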
With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low privacy-preserving degree. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the privacy-preserving degree by multi-parameter perturbation. To improve time-efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of database scans using set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
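The abstract does not detail the block-inversion speed-up; the standard Schur-complement identity it presumably builds on, sketched for a 2x2 blocking:

```python
import numpy as np

def block_inverse(M, k):
    """Invert M by 2x2 blocking with the Schur complement: only the
    k-by-k block A and the Schur complement S = D - C A^-1 B are
    inverted directly, which is cheaper than inverting M whole."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ai = np.linalg.inv(A)
    Si = np.linalg.inv(D - C @ Ai @ B)        # Schur complement inverse
    top_left = Ai + Ai @ B @ Si @ C @ Ai
    return np.block([[top_left,     -Ai @ B @ Si],
                     [-Si @ C @ Ai,  Si]])

M = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
print(np.allclose(block_inverse(M, 2), np.linalg.inv(M)))  # True
```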
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architectures, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a decision-making process that determines the parameter tuning direction by trial and error, rather than producing the quantitative values needed for agile parameter tuning. It relies on a predefined adjustment value for each tuning action; however, it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of the fast online learning and self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads, and it is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach, both in improving effective system throughput and in minimizing average response time.
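A toy fuzzy-control step (without the neural learning half of the paper's controller), showing how membership grades yield a quantitative adjustment instead of a fixed trial-and-error increment (memberships and rule base are illustrative):

```python
def fuzzy_tune(error, d_error):
    """Tiny fuzzy-control step for a server parameter (e.g., a thread
    pool size): memberships for the throughput error and its trend
    combine into a graded, quantitative adjustment."""
    def tri(x, lo, mid, hi):  # triangular membership function
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

    neg = tri(error, -2, -1, 0)
    pos = tri(error, 0, 1, 2)
    steady = tri(d_error, -1, 0, 1)  # how stable the error is
    # Rule base: push harder when the error is large and steady.
    return (pos - neg) * (0.5 + 0.5 * steady)

print(fuzzy_tune(error=1.0, d_error=0.0))  # 1.0: full positive step
```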