Biblio
Application domains in which early performance evaluation is needed are becoming more complex. In addition to traditional sources of complexity, such as the number of components, their interactions, and complicated control and coordination schemes, emerging applications may require adaptive response and reconfiguration under the impact of externally observable (security) parameters. In this paper we introduce an approach for effective modeling and analysis of performance and security tradeoffs. The approach identifies a suitable allocation of resources that meets performance requirements while maximizing measurable security effects. We demonstrate this approach through the analysis of the performance sensitivity of a Border Inspection Management System (BIMS) under changing security mechanisms (e.g., biometric system parameters for passenger identification). The final result is a model-based approach that allows us to make decisions about BIMS performance and security mechanisms on the basis of traveler arrival rates and traveler identification security guarantees. We describe the experience gained when applying this approach to the daily flight arrival schedule of a real airport.
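The paper's models are not reproduced here; as a rough illustration of the performance side of the tradeoff it studies, the sketch below treats an inspection lane as an M/M/c queue in which a stronger biometric check lengthens the mean service time, and computes the expected wait via the Erlang C formula. All numbers (arrival rate, service times, booth counts) are hypothetical.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean waiting time in an M/M/c queue (Erlang C formula)."""
    rho = lam / (c * mu)
    if rho >= 1:
        return float("inf")  # unstable: arrivals exceed inspection capacity
    a = lam / mu
    # Probability that an arriving traveler must wait (Erlang C).
    num = (a ** c) / (math.factorial(c) * (1 - rho))
    den = sum(a ** k / math.factorial(k) for k in range(c)) + num
    p_wait = num / den
    return p_wait / (c * mu - lam)

# Hypothetical numbers: 120 travelers/hour; a stronger biometric check
# raises mean inspection time from 45 s to 70 s per traveler.
lam = 120 / 3600.0                       # arrivals per second
for secs in (45, 70):
    mu = 1.0 / secs
    for booths in (2, 3, 4):
        w = erlang_c_wait(lam, mu, booths)
        print(f"service={secs}s booths={booths} mean wait={w:.0f}s")
```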
Security, as a condition, is the degree of resistance to, or protection from, harm. We investigate how to secure devices in a way that is simple for the user to deploy yet stringent enough to deny malware intrusion into the protected circle, seeking a balance between these extremes. The dominant approach to access control today is the password or PIN, but its flaws are well documented. We propose an application (to be incorporated in a mobile phone) that allows the user's device to be used as a biometric capture device, in addition to serving as a biometric signature acquisition device, for a multi-level authentication procedure that grants access to a specific web service of exclusive confidentiality. To evaluate the feasibility of the proposed procedure, we choose a specific set of domain specifications and evaluate the accuracy of the biometric face recognition, along with the compatibility of the developed application with different sample inputs. The results compare favorably with existing devices and suggest the approach can serve a larger section of society through the Internet while improving security.
Cloud computing is an emerging computing technology in which costs are directly proportional to usage and demand. However, the very features that make this technology attractive also give rise to security and privacy problems: users' data are stored on cloud servers that are not under their own control, so the cloud services are required to authenticate the user. In general, most cloud authentication algorithms do not provide anonymity, and the cloud provider can track users easily. Privacy and authenticity are thus two critical issues of cloud security. In this paper, we propose a secure anonymous authentication method for cloud services using an identity-based group signature, which allows cloud users to prove that they have the privilege to access the data without revealing their identities.
Recently, cloud computing has been spotlighted as a new paradigm for database management. In this environment, databases are outsourced and deployed at a service provider in order to reduce the cost of data storage and maintenance. However, the service provider might be untrusted, so two data security issues, data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index, due to unsophisticated data grouping strategies; in addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication scheme that guarantees data confidentiality and query result integrity for users. A periodic-function-based data grouping scheme is designed to privately partition a spatial database into small groups and generate a signature for each group. The group signature is used to check the correctness and completeness of the outsourced data when answering a range query. Performance evaluation shows that the proposed method outperforms the existing method, reducing range query processing time by up to a factor of three.
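The paper's periodic grouping function and group signature scheme are not specified here; the sketch below illustrates the general idea under assumptions of my own, assigning spatial points to groups through a periodic function of one coordinate (so group membership does not mirror the data distribution) and using an HMAC as a stand-in for the group signature.

```python
import hashlib, hmac

SECRET = b"server-side signing key"       # stand-in for a real signature key

def group_id(x, period=10.0, groups=4):
    """Assign a point to a group via a periodic function of its x coordinate,
    so contiguous regions of space are deliberately split across groups."""
    phase = (x % period) / period         # position within one period
    return int(phase * groups) % groups

def sign_group(points):
    """HMAC over the serialized group contents (stand-in for a group signature)."""
    blob = ",".join(f"{x:.3f}:{y:.3f}" for x, y in sorted(points)).encode()
    return hmac.new(SECRET, blob, hashlib.sha256).hexdigest()

db = [(1.2, 3.4), (11.5, 2.2), (5.0, 9.9), (15.1, 0.4)]   # toy spatial data
groups = {}
for p in db:
    groups.setdefault(group_id(p[0]), []).append(p)
signatures = {gid: sign_group(pts) for gid, pts in groups.items()}
print(signatures)   # one verifiable signature per privacy-aware group
```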
Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other related topics. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and pose a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as technical courses of study, and such curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are underrepresented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and to be consistent with the broad suite of professional needs in the discipline.
Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly on the basis of their association with other known malicious nodes. This methodology is, however, highly prevalent in the graph analytics world, where it is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of the cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
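A minimal sketch of round-based probabilistic threat propagation, with the anti-cyclic constraint approximated by subtracting, from each neighbor's score, the contribution the node itself previously sent to that neighbor. The graph, seed set, and attenuation factor are invented.

```python
# Hypothetical edge list: which nodes communicate with which.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "f")]
seeds = {"a"}            # known malicious nodes
alpha = 0.5              # per-hop attenuation (assumed)

nbrs = {}
for u, v in edges:
    nbrs.setdefault(u, set()).add(v)
    nbrs.setdefault(v, set()).add(u)

p = {n: (1.0 if n in seeds else 0.0) for n in nbrs}
contrib = {n: {} for n in nbrs}   # contrib[v][w]: score v received via w

for _ in range(20):               # iterate toward an approximate fixed point
    new_p = {}
    for v in nbrs:
        if v in seeds:
            new_p[v] = 1.0        # seeds stay clamped at probability 1
            continue
        total = 0.0
        for w in nbrs[v]:
            # Exclude the part of w's score that v itself contributed,
            # the constraint used to stop cyclic self-reinforcement.
            incoming = alpha * max(0.0, p[w] - contrib[w].get(v, 0.0))
            contrib[v][w] = incoming
            total += incoming
        new_p[v] = min(1.0, total)
    p = new_p

print({n: round(s, 3) for n, s in sorted(p.items())})
```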
The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDSs. It has been witnessed that, using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF), and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites; such scripts perform actions using the victim's browser without his or her permission. This raises the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address these challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage. Applied in the context of client-side web-based attacks, it is expected to provide a more secure browsing environment for the end-user.
In this paper, a programmable management framework for SDN networks is presented. The concept is in line with the SDN philosophy: it can be programmed from scratch, and the implemented management functions can be case dependent. The concept introduces a new node into the SDN architecture, the SDN manager. In line with the latest trends in network management, the approach allows for embedded management of all network nodes and gradual implementation of management functions, providing code lifecycle management as well as the ability to update code on the fly. The described concept is a bottom-up approach whose key element is a programmable distributed execution environment (PDEE) based on well-established technologies such as OSGi and FIPA. The described management idea has a strong impact on the evolution of the SDN architecture, because the proposed distributed execution environment is generic and can therefore be used not only for management but also for distributing control or application functions.
A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communications devices. In realistic scenarios, radio frequency oscillators in each transmitter-and-receiver pair always present device-dependent biases to the nominal oscillating frequency. The combination of these biases and mobility-induced Doppler shift, characterized as a time-varying CFO, can be used as a radiometric signature for wireless device authentication. In the proposed authentication scheme, the variable CFO values at different communication times are first estimated. Kalman filtering is then employed to predict the current value by tracking the past CFO variation, which is modeled as an autoregressive random process. To achieve the proposed authentication, the current CFO estimate is compared with the Kalman predicted CFO using hypothesis testing to determine whether the signal has followed a consistent CFO pattern. An adaptive CFO variation threshold is derived for device discrimination according to the signal-to-noise ratio and the Kalman prediction error. In addition, a software-defined radio (SDR) based prototype platform has been developed to validate the feasibility of using CFO for authentication. Simulation results further confirm the effectiveness of the proposed scheme in multipath fading channels.
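A minimal sketch of the authentication loop under assumed parameters: the CFO follows an AR(1) process, a scalar Kalman filter predicts the next value, and a frame is rejected when the innovation exceeds a threshold that adapts to the prediction and measurement variances (a simple 3-sigma test here; the paper derives its own threshold from the SNR and the Kalman prediction error).

```python
import numpy as np

rng = np.random.default_rng(0)
phi, q, r = 0.98, 1e-4, 4e-4   # AR(1) coefficient, process/measurement noise (assumed)

# Simulated normalized CFO track of the legitimate device, plus one spoofed frame.
true_cfo = np.zeros(200)
for t in range(1, 200):
    true_cfo[t] = phi * true_cfo[t - 1] + rng.normal(0, np.sqrt(q))
obs = true_cfo + rng.normal(0, np.sqrt(r), size=200)
obs[120] += 0.15               # attacker's oscillator bias appears at frame 120

x, P = obs[0], 1.0             # Kalman state estimate and variance
for t in range(1, 200):
    x_pred = phi * x           # predict the next CFO from the AR(1) model
    P_pred = phi * phi * P + q
    innov = obs[t] - x_pred    # measured minus predicted CFO
    S = P_pred + r             # innovation variance
    if abs(innov) > 3 * np.sqrt(S):   # adaptive threshold: 3-sigma hypothesis test
        print(f"frame {t}: CFO inconsistent, reject (|innov|={abs(innov):.3f})")
        continue               # do not update the filter with a rejected frame
    K = P_pred / S             # Kalman gain
    x = x_pred + K * innov
    P = (1 - K) * P_pred
```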
We propose an optical security method for object authentication using photon-counting encryption implemented with phase-encoded QR codes. By combining full-phase double-random-phase encryption with photon-counting imaging and applying an iterative Huffman coding technique, we are able to encrypt and compress an image containing primary information about the object. This data can then be stored inside an optically phase-encoded QR code for robust readout, decryption, and authentication. The optically encoded QR code is verified by examining the speckle signature of the optical masks using statistical analysis. Optical experimental results are presented to demonstrate the performance of the system, together with experiments in which a commercial smartphone reads the optically encoded QR code. To the best of our knowledge, this is the first report on integrating photon-counting security with optically phase-encoded QR codes.
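A numerical sketch of the encryption pipeline, assuming unit-magnitude random phase masks and a Poisson photon-counting model; the iterative Huffman compression and QR encoding steps of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                       # stand-in for the primary image

# Full-phase double random phase encryption: phase-encode the image, then
# apply random phase masks in the input and Fourier planes.
f = np.exp(1j * np.pi * img)                     # phase-encoded input
m1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane mask (key 1)
m2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane mask (key 2)
cipher = np.fft.ifft2(np.fft.fft2(f * m1) * m2)

# Photon-counting: keep only a limited expected number of photons, sampled
# from a Poisson law over the normalized ciphertext intensity.
n_photons = 5000                                 # assumed photon budget
intensity = np.abs(cipher) ** 2
counts = rng.poisson(n_photons * intensity / intensity.sum())
sparse_cipher = np.where(counts > 0, cipher, 0)  # keep values only where photons arrive

# Decryption with the correct masks; the sparse data does not fully recover
# the image but still supports statistical verification.
dec = np.fft.ifft2(np.fft.fft2(sparse_cipher) * np.conj(m2)) * np.conj(m1)
corr = np.corrcoef(np.angle(dec).ravel(), (np.pi * img).ravel())[0, 1]
print(f"phase correlation with original: {corr:.2f}")
```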
Mobile ad-hoc networks (MANETs) consist of peer-to-peer, infrastructure-less communicating nodes that are highly dynamic, which makes routing data challenging. Routing protocols for such networks face random topology changes, symmetric or asymmetric links, and power constraints during data transmission. Under such circumstances, both proactive and reactive routing are usually inefficient. We consider the Zone Routing Protocol (ZRP), which combines the qualities of proactive (IARP) and reactive (IERP) protocols. In ZRP, an updated topological map of the zone centered on each node is maintained, so immediate routes are available inside each zone. To communicate outside a zone, a route discovery mechanism is employed, aided by the local routing information of the zones. In MANETs, security is always an issue: a node can turn malicious and hamper the normal flow of packets. To overcome this, we use a clustering technique to separate nodes exhibiting intrusive behavior from those behaving normally. We call this technique effective k-means clustering, as it is motivated by k-means. We propose to implement an intrusion detection system on each node of a MANET that uses ZRP for packet flow, and then use effective k-means to separate the malicious nodes from the network, so that the ad-hoc network is free from malicious activity and the normal flow of packets is possible.
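The paper's "effective k-means" variant is not reproduced here; as a stand-in, the sketch below applies plain k-means (k=2) to hypothetical per-node audit features and isolates the cluster whose centroid looks intrusive.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical per-node audit features: [packet drop rate, route-request rate].
normal = rng.normal([0.02, 1.0], [0.01, 0.2], size=(45, 2))
malicious = rng.normal([0.60, 5.0], [0.10, 1.0], size=(5, 2))
features = np.vstack([normal, malicious])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
# Flag the cluster whose centroid has the higher drop rate as intrusive.
bad = int(np.argmax(km.cluster_centers_[:, 0]))
suspects = np.where(km.labels_ == bad)[0]
print("nodes flagged for isolation:", suspects)
```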
In the United States, the number of networked Phasor Measurement Units (PMUs) will increase from 166 devices in 2010 to 1043 in 2014. According to the Department of Energy, they are being installed in order to "evaluate and visualize reliability margin (which describes how close the system is to the edge of its stability boundary)." However, there is still much debate in academia and industry about the usefulness of phase angles as unambiguous predictors of dynamic stability. In this paper, using four years of actual data from the Hydro-Québec EMS, it is shown that phase angles enable satisfactory predictions of power transfer and dynamic security margins across a critical interface using random forest models, with both explanation level and R-squared accuracy exceeding 99%. A generalized linear model (GLM) is then implemented to predict phase angles on day-ahead to hour-ahead time frames, using historical phase angle values and load forecasts. Combining the GLM-based angle forecast with the random forest mapping of phase angles to power transfers results in a new data-driven approach to dynamic security monitoring.
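A toy version of the two-stage pipeline on synthetic data: a random forest maps phase angles to interface power transfer, and a linear model (standing in for the GLM) forecasts angles from lagged angles and a load forecast. All series and coefficients below are synthetic, invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Synthetic stand-ins: hourly load, phase angle, and the power transfer
# across a critical interface (a noisy function of the angle).
hours = 2000
load = rng.normal(30e3, 3e3, hours)                # MW, hypothetical
angle = 0.002 * load + rng.normal(0, 2, hours)     # degrees
transfer = 40 * angle + rng.normal(0, 30, hours)   # MW

# Stage 1: random forest maps phase angles to power transfer.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(angle[:1500].reshape(-1, 1), transfer[:1500])

# Stage 2: a linear model (GLM stand-in) forecasts angles from the angle
# 24 hours earlier plus the day-ahead load forecast.
X = np.column_stack([angle[:-24], load[24:]])
glm = LinearRegression().fit(X[:1500], angle[24:][:1500])

# Chain the two stages: forecast angles, then map them to transfers.
angle_hat = glm.predict(X[1500:])
transfer_hat = rf.predict(angle_hat.reshape(-1, 1))
print("forecast transfer, first 3 hours:", transfer_hat[:3].round(1))
```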
In wireless networks, spoofing is one of the most common and challenging attacks, and it degrades overall network performance. In this paper, a medoid-based clustering approach is proposed to detect multiple spoofing attacks in wireless networks. In addition, Enhanced Partitioning Around Medoids (EPAM) with average silhouette is integrated with the clustering mechanism to detect multiple spoofing attacks with a higher accuracy rate. In the proposed method, received-signal-strength-based medoid clustering is adopted for attack detection. To prevent multiple spoofing attacks, a dynamic MAC address allocation scheme using MD5 hashing is implemented. The experimental results show that the proposed method can detect spoofing attacks with a high accuracy rate and prevent them, improving overall network performance.
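A small sketch of the detection side under invented RSS values: an exhaustive two-medoid PAM (feasible at this scale, standing in for EPAM) partitions per-frame RSS vectors, and the average silhouette indicates whether one MAC address is being used from two physical locations.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# RSS (dBm) vectors observed at 3 monitors for frames carrying one MAC
# address; a spoofer transmits from elsewhere, forming a second cluster.
victim = rng.normal([-48, -61, -70], 2, size=(30, 3))
spoofer = rng.normal([-72, -50, -55], 2, size=(10, 3))
rss = np.vstack([victim, spoofer])

def pam_two_medoids(X):
    """Exhaustive Partitioning Around Medoids for k=2 (fine at this scale)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    best = min(combinations(range(len(X)), 2),
               key=lambda m: d[:, list(m)].min(axis=1).sum())
    return d[:, list(best)].argmin(axis=1)                # nearest-medoid labels

labels = pam_two_medoids(rss)
sil = silhouette_score(rss, labels)   # high silhouette => two real sources
print(f"silhouette={sil:.2f}; spoofing suspected: {sil > 0.5}")
```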
This paper proposes an algorithm for multi-channel SAR ground moving target detection and estimation using the Fractional Fourier Transform (FrFT). To detect moving targets with low speed, the clutter is first suppressed by the Displaced Phase Center Antenna (DPCA) technique, which enhances the signal-to-clutter ratio. Once the clutter has been suppressed, the remaining echo of the moving target can be regarded as a chirp signal whose parameters can be estimated by the FrFT, one of the most widely used tools for time-frequency analysis. From the estimated Doppler parameters, the motion parameters, including velocity and acceleration, can be obtained. The effectiveness of the proposed method is validated by simulation.
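The FrFT itself is not reproduced here; the sketch below estimates a chirp's parameters by a dechirp-and-FFT search over candidate Doppler rates, which locates the same peak an FrFT rotation-angle search would. Signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, T = 1000.0, 1.0                      # sample rate (Hz) and duration (s), assumed
t = np.arange(0, T, 1 / fs)
f0_true, k_true = 50.0, 120.0            # Doppler centroid (Hz) and rate (Hz/s)
sig = np.exp(2j * np.pi * (f0_true * t + 0.5 * k_true * t**2))
sig = sig + 0.5 * rng.standard_normal(len(t))   # additive noise

# Search over candidate chirp rates: multiplying by the conjugate chirp at
# the correct rate collapses the signal to a single spectral line.
best = max(
    (np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * k * t**2))).max(), k)
    for k in np.arange(0.0, 200.0, 1.0)
)
k_hat = best[1]
dechirped = sig * np.exp(-1j * np.pi * k_hat * t**2)
f0_hat = np.abs(np.fft.fft(dechirped)).argmax() / T

# The centroid maps to radial velocity and the rate to acceleration,
# through the radar wavelength (not modeled in this toy example).
print(f"estimated rate={k_hat:.0f} Hz/s, centroid={f0_hat:.0f} Hz")
```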
In an attempt to support customization, many web applications allow the integration of third-party server-side plugins that offer diverse functionality but also open an additional door for security vulnerabilities. In this paper we study the use of static code analysis tools to detect vulnerabilities in web application plugins. The goal is twofold: 1) to study the effectiveness of static analysis in detecting web application plugin vulnerabilities, and 2) to understand the potential impact of those plugins on the security of the core web application. We use two static code analyzers to evaluate a large number of plugins for a widely used Content Management System. Results show that many plugins currently deployed worldwide have dangerous cross-site scripting and SQL injection vulnerabilities that can be easily exploited, and that even widely used static analysis tools may present disappointing vulnerability coverage and false positive rates.
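The analyzers used in the study are not named in this sketch; as a toy illustration of the class of checks involved, the following flags plugin lines where request input reaches an output or query sink without a recognized sanitizer. Real tools track data flow across statements rather than matching single lines.

```python
import re

# Toy static check: request input reaching an output or query sink without
# a recognized sanitizer. Patterns and sanitizer list are illustrative only.
SOURCE = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b")
SINKS = {"xss": re.compile(r"\becho\b|\bprint\b"),
         "sqli": re.compile(r"\b(mysql_query|mysqli_query)\s*\(")}
SANITIZERS = re.compile(r"htmlspecialchars|esc_html|intval|prepare")

plugin_src = '''
echo "Hello " . $_GET["name"];
mysqli_query($db, "SELECT * FROM t WHERE id=" . $_GET["id"]);
echo esc_html($_GET["name"]);
'''

for lineno, line in enumerate(plugin_src.strip().splitlines(), 1):
    if SOURCE.search(line) and not SANITIZERS.search(line):
        for kind, sink in SINKS.items():
            if sink.search(line):
                print(f"line {lineno}: possible {kind.upper()}: {line.strip()}")
```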
This paper presents the design and implementation of an information flow tracking framework, based on code rewriting, that prevents sensitive information leaks in browsers by combining the ideas of taint tracking and information flow analysis. Our system has two main stages. First, it abstracts the semantics of JavaScript code and converts it to a general intermediate representation on the basis of the JavaScript abstract syntax tree. Second, the abstract intermediate representation is executed by a dedicated taint engine that analyzes tainted information flow. Our approach ensures fine-grained isolation for both the confidentiality and the integrity of information. We have implemented a proof-of-concept prototype, named JSTFlow, and deployed it as a browser proxy that rewrites web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not require any modifications to the target system, our system is readily deployable in practice.
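A minimal sketch of the second stage under an invented intermediate representation: taint enters at a source, propagates through operations, and is blocked at a sink. Instruction names and the source/sink lists are placeholders, not JSTFlow's actual IR.

```python
# Minimal taint propagation over a toy three-address IR, in the spirit of
# the paper's intermediate representation (instruction names are invented).
TAINT_SOURCES = {"document.cookie"}
SINKS = {"sendRequest"}                 # e.g., network calls that could leak data

program = [
    ("load",  "t1", "document.cookie"),   # t1 = document.cookie
    ("const", "t2", "'prefix-'"),         # t2 = constant string
    ("binop", "t3", "t2", "t1"),          # t3 = t2 + t1
    ("call",  "sendRequest", "t3"),       # would exfiltrate the cookie
]

tainted = set()
for instr in program:
    op = instr[0]
    if op == "load" and instr[2] in TAINT_SOURCES:
        tainted.add(instr[1])                       # a source introduces taint
    elif op == "binop" and set(instr[2:]) & tainted:
        tainted.add(instr[1])                       # taint flows through operations
    elif op == "call" and instr[1] in SINKS:
        if set(instr[2:]) & tainted:
            print(f"blocked: tainted value {instr[2]} reaching {instr[1]}")
```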
Conventional cellular systems are dimensioned according to a worst-case scenario and designed to ensure ubiquitous coverage with an always-present wireless channel, irrespective of the spatial and temporal demand for service. A more energy-conscious approach requires an adaptive system with a minimal amount of overhead that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean-slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next-generation cellular systems compared with Long Term Evolution (LTE). Considering channel estimation as a performance metric, while conforming to the time and frequency constraints of pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
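As a back-of-the-envelope version of the overhead argument, the sketch below counts LTE cell-specific reference-signal resource elements per resource-block pair and compares them with a hypothetical on-demand design whose pilots are sent only when traffic is present. The LTE counts are standard values recalled from the specification (and should be checked against it); the activity factor is assumed.

```python
# Per LTE resource-block pair (12 subcarriers x 14 OFDM symbols = 168 REs),
# cell-specific reference signals occupy about 8 REs for one antenna port
# and about 16 REs for two ports (values recalled from the LTE spec).
res_per_rb = 12 * 14
for ports, crs_res in ((1, 8), (2, 16)):
    print(f"LTE, {ports} port(s): pilot overhead = {crs_res / res_per_rb:.1%}")

# Hypothetical on-demand design: pilots are transmitted only in frames that
# carry traffic. With a 20% frame activity factor, the average overhead shrinks.
activity = 0.20                       # assumed duty cycle
print(f"on-demand design: average overhead = {activity * 16 / res_per_rb:.1%}")
```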
A scheme for preserving privacy in the MobilityFirst (MF) clean-slate future Internet architecture is proposed in this paper. The proposed scheme, called Anonymity in MobilityFirst (AMF), uses a three-tiered approach to exploit inherent properties of the MF network, such as the Globally Unique Flat Identifier (GUID) and the Global Name Resolution Service (GNRS), to provide anonymity to users. Newly proposed schemes for exchanging keys between the different tiers of routers alleviate trust issues, and multiple routers are employed in each tier to prevent routers in the three tiers from colluding to expose end users.
Network virtualization sits firmly on the Internet's evolutionary path, allowing researchers to experiment with novel clean-slate designs over the production network and practitioners to manage multi-tenant infrastructures in a flexible and scalable manner. In such scenarios, isolation between virtual networks is often understood as purely logical, as in address space isolation or flow space isolation. This view neglects the effect that network virtualization has on resource allocation network-wide. In this work we investigate the price paid for a purely logical approach in terms of performance degradation, a loss borne by the actual users of a multi-tenant datacenter network. We propose a solution leveraging a new network virtualization primitive, an online link utilization feedback mechanism, which provides each tenant with the information necessary to make efficient use of network resources. We evaluate our solution through a real implementation exploiting the OpenFlow protocol. Empirical results confirm that the proposed scheme supports tenants in exploiting virtualized network resources effectively.
The future Internet has been a hot topic during the past decade, and many approaches towards it, ranging from incremental evolution to complete clean-slate designs, have been proposed. One proposal, LISP, advocates the separation of the identifier and locator roles of IP addresses to reduce BGP churn and BGP table size. Up to now, however, most studies of LISP have been theoretical, and little is known about the performance of actual LISP deployments. In this paper, we fill this gap through measurement campaigns carried out on the LISP Beta Network. More precisely, we evaluate the performance of the two key components of the infrastructure: the control plane (i.e., the mapping system) and the interworking mechanism (i.e., communication between LISP and non-LISP sites). Our measurements highlight that the performance offered by the LISP interworking infrastructure depends strongly on BGP routing policies. If we exclude misconfigured nodes, the mapping system typically provides reliable performance and relatively low median mapping resolution delays. Although the bias is not large, control plane performance favors US sites, as a result of their larger LISP user base but also because the European infrastructure appears to be less reliable.
Big data's explosive growth has prompted the US government to release new reports that address the resulting issues, particularly those related to privacy. In the Web extra at http://youtu.be/j49eoe5g8-c, an audio recording from the Computing and the Law column, authors Brian M. Gaff, Heather Egan Sussman, and Jennifer Geetter discuss these reports.
This paper analyzes a data spill incident in order to study policy issues for ICT security from a social science perspective focused on risk. The results of the case analysis are as follows. First, ICT risk can be categorized as 'severe, strong, intensive, or individual' according to both its probability and its impact. Second, a risk management strategy of 'avoid, transfer, mitigate, or accept' can be designated by understanding the culture type of the relevant group, namely 'hierarchy, egalitarianism, fatalism, or individualism'. Third, personal data exhibits the characteristics of big data, 'volume, velocity, and variety', in each risk situation. Therefore, government needs to establish a standing organization responsible for ICT risk policy and management in the new big data era, and policy for ICT risk management needs to balance 'technology, norms, laws, and market' considerations.
In recent years, Attribute Based Access Control (ABAC) has evolved as the preferred logical access control methodology in the Department of Defense and Intelligence Community, as well as many other agencies across the federal government. Gartner recently predicted that "by 2020, 70% of enterprises will use attribute-based access control (ABAC) as the dominant mechanism to protect critical assets, up from less than 5% today." A definition and introduction to ABAC can be found in NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations, and Intelligence Community Policy Guidance (ICPG) 500.2, Attribute-Based Authorization and Access Management. Within ABAC, attributes are used to make critical access control decisions, yet standards for attribute assurance have only recently begun to be researched and documented. This presentation outlines factors influencing attributes that an authoritative body must address when standardizing attribute assurance, and proposes some notional implementation suggestions for consideration. Attribute assurance brings a level of confidence to attributes that is similar to levels of assurance for authentication (e.g., the guidelines specified in NIST SP 800-63 and OMB M-04-04). There are three principal areas of interest when considering factors related to attribute assurance. Accuracy establishes the policy and technical underpinnings for semantically and syntactically correct descriptions of subjects, objects, or environmental conditions. Interoperability considers the different standards and protocols used for secure sharing of attributes between systems, in order to avoid compromising the integrity and confidentiality of the attributes or exposing vulnerabilities in provider or relying systems or entities. Availability ensures that the update and retrieval of attributes satisfy the application to which the ABAC system is applied; in addition, the security and backup capability of attribute repositories needs to be considered. Similar to a Level of Assurance (LOA), a Level of Attribute Assurance (LOAA) assures a relying party that the attribute value received from an Attribute Provider (AP) is accurately associated with the subject, resource, or environmental condition to which it applies. An AP is any person or system that provides subject, object (or resource), or environmental attributes to relying parties, regardless of transmission method. The AP may be the original, authoritative source (e.g., an applicant); it may receive information from an authoritative source for repackaging or store-and-forward delivery to relying parties (e.g., an employee database); or it may derive the attributes from formulas (e.g., a credit score). Regardless of the source of the AP's attributes, the same standards should apply when determining the LOAA. As ABAC is implemented throughout government, attribute assurance will be a critical, limiting factor in its acceptance. With this presentation, we hope to encourage dialog between attribute relying parties, attribute providers, and the federal agencies that will be defining standards for ABAC in the immediate future.
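A minimal sketch of an ABAC decision point: access is granted only when subject, object, and environment attributes jointly satisfy a rule. Attribute names and values are invented for illustration; a real deployment would also weigh the LOAA of each attribute before trusting it.

```python
# Minimal ABAC policy evaluation: a rule grants access only when subject,
# object, and environment attributes all satisfy its conditions. Attribute
# names and values here are hypothetical.
policy = [
    {"action": "read",
     "when": lambda s, o, e: s["clearance"] >= o["sensitivity"]
                             and s["agency"] == o["owner_agency"]
                             and e["network"] == "internal"},
]

def decide(action, subject, obj, env):
    """Return True if any policy rule permits the requested action."""
    return any(r["action"] == action and r["when"](subject, obj, env)
               for r in policy)

subject = {"clearance": 3, "agency": "DoD"}
obj = {"sensitivity": 2, "owner_agency": "DoD"}
env = {"network": "internal"}
print(decide("read", subject, obj, env))   # True: all attribute checks pass
```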