Bibliography
In recent years, HTML5 has been widely adopted by popular browsers. Unfortunately, as a new Web standard, HTML5 may expand the Cross-Site Scripting (XSS) attack surface even as it improves the interactivity of the page. In this paper, we identify 14 XSS attack vectors related to HTML5 through a systematic analysis of its new tags and attributes. Based on these vectors, an XSS test vector repository is constructed and a dynamic XSS vulnerability detection tool focusing on Webmail systems is implemented. By applying the tool to several popular Webmail systems, seven exploitable XSS vulnerabilities were found. The evaluation results show that our tool can efficiently detect XSS vulnerabilities introduced by HTML5.
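The abstract does not enumerate the 14 vectors; as a hedged illustration only, the sketch below shows how a small HTML5-oriented test-vector repository might be driven by a dynamic scanner. The payloads are well-known public examples, not the vectors identified in the paper, and the requests-based probe is an assumption about tooling.

    # Illustrative sketch only: a tiny HTML5-oriented XSS test-vector repository
    # and a naive reflection check. The payloads are public examples, not the
    # paper's 14 vectors.
    import requests  # assumed HTTP client; any other would do

    HTML5_VECTORS = [
        '<video><source onerror="alert(1)">',          # media-element event handler
        '<input autofocus onfocus=alert(1)>',          # autofocus-triggered handler
        '<form id=x></form><button form=x formaction="javascript:alert(1)">x</button>',
    ]

    def probe(url, param):
        """Send each vector as the value of `param` and report raw reflections."""
        findings = []
        for payload in HTML5_VECTORS:
            resp = requests.get(url, params={param: payload}, timeout=10)
            if payload in resp.text:               # payload echoed back unencoded
                findings.append(payload)
        return findings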
Because web applications are used for many day-to-day activities, they have become a prime target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks; it can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications purely with server-side solutions is not practical, as developers are not necessarily security aware. Therefore, browser vendors have evolved client-side filters to defend against these attacks. This paper shows that even the most prevalent XSS filters deployed by the latest versions of widely used web browsers do not provide adequate defense. We evaluate three browsers, Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27, against reflected XSS attacks targeting different types of vulnerabilities. We find that none of them is able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be even more effective if this add-on were integrated inside the browser instead of being enforced as an extension.
Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert code into the output of a web application; the code is delivered to the visitor's web browser, executes automatically, and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented; most of them are filters that sanitize malicious input. However, many of these filters do not protect against newly designed, sophisticated attacks such as multiple points of injection or injection into script context. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting web applications that can be exploited using the above-mentioned sophisticated attacks. Results show that the proposed approach provides a high and accurate detection rate of exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; it could be an effective solution if integrated inside the browser rather than being enforced as an extension.
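The abstract does not give the detection algorithm in detail; the following is a minimal sketch of the underlying idea, assuming GET parameters and an HTML body context. A unique token wrapped around HTML metacharacters is injected and every reflection point in the response is inspected, which also covers pages that reflect the input at multiple points.

    # Minimal sketch under the assumptions stated above; not the paper's code.
    import uuid
    import requests

    METACHARS = '<>"\''

    def find_unfiltered_reflections(url, param):
        token = uuid.uuid4().hex
        probe = token + METACHARS + token          # marker on both sides of the metacharacters
        body = requests.get(url, params={param: probe}, timeout=10).text
        hits = []
        start = 0
        while (i := body.find(token, start)) != -1:
            j = body.find(token, i + len(token))   # closing marker of this reflection
            if j == -1:
                break
            reflected = body[i + len(token):j]
            hits.append(reflected == METACHARS)    # True when metacharacters survive raw
            start = j + len(token)
        return hits                                # one entry per reflection point

A full implementation would additionally classify the injection context (attribute, script, URL) of each reflection, which is exactly the kind of case the abstract highlights.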
Dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various other malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose a taint analysis and defensive programming based, HTML context-sensitive approach for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments on test subjects show that the proposed approach is more efficient than existing ones.
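As a drastically simplified illustration of the idea only (not the authors' analysis), the sketch below flags PHP echo statements that emit request data without an HTML-context sanitizer and emits a suggested defensive rewrite, mirroring the automatic-suggestion feature mentioned above. The regular expressions and the single-sink assumption are illustrative.

    # Drastically simplified taint-to-sink check for PHP source; illustrative only.
    import re

    TAINT_SOURCES = r'\$_(?:GET|POST|REQUEST|COOKIE)\[[^\]]+\]'
    SINK = re.compile(r'echo\s+(?P<expr>[^;]+);')

    def scan_php(source):
        findings = []
        for lineno, line in enumerate(source.splitlines(), 1):
            m = SINK.search(line)
            if not m:
                continue
            expr = m.group('expr')
            if re.search(TAINT_SOURCES, expr) and 'htmlspecialchars' not in expr:
                findings.append((lineno, expr.strip(),
                                 f'echo htmlspecialchars({expr.strip()}, ENT_QUOTES);'))
        return findings  # (line number, vulnerable expression, suggested fix)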
Dependence on web applications has been increasing very rapidly in recent times for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various vulnerabilities and becomes the reason for their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) are the most dangerous security vulnerabilities exploited in popular web applications such as eBay, Google, Facebook, and Twitter. Research on defensive programming, vulnerability detection, and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines for developing secure applications, but developers often do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during execution in the production environment. Accurate detection of SQLI and XSS vulnerabilities during the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches for detecting SQL injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before they are exploited in the production environment. This paper should help researchers identify future directions for securing legacy web applications in the early phases of the software development life cycle.
Recently, there has been great interest in physical-layer security techniques that exploit artificial noise (AN) to enlarge the channel-condition gap between the legitimate receiver and the eavesdropper. However, in certain communication scenarios, this strategy may suffer from attacks mounted from a signal processing perspective. In this paper, we consider speech signals and a scenario in which the eavesdropper's channel performance is similar to that of the legitimate receiver. We design the optimal AN to resist an eavesdropper who uses blind source separation (BSS) to reconstruct the secret information. The optimal AN is obtained by making a trade-off between the results of direct eavesdropping and reconstruction. Simulation results show that the proposed AN resists BSS attacks more effectively than white Gaussian AN.
Datacenter-based cloud computing has induced new disruptive trends in networking, key among which is network virtualization. Software-Defined Networking overlays aim to improve the efficiency of next-generation multi-tenant datacenters. While early overlay prototypes are already available, they focus mainly on core functionality, and little is known yet about their impact on system-level performance. Using query completion time as our primary performance metric, we evaluate the overlay network's impact on two representative datacenter workloads, Partition/Aggregate and 3-Tier. We measure how much performance is traded for the overlay's benefits in manageability, security, and policing. Finally, we aim to assist datacenter architects by providing a detailed evaluation of the key overlay choices, all made possible by our accurate cross-layer hybrid/mesoscale simulation platform.
Cloud computing brings many advantages for enterprise IT infrastructure; virtualization technology, the backbone of the cloud, provides easy consolidation of resources and reduction of cost, space, and management effort. However, the security of critical and private data is a major concern that still keeps many customers from switching from their traditional in-house IT infrastructure to a cloud service. Techniques to physically locate a virtual machine in the cloud, the proliferation of software vulnerability exploits, and cross-channel attacks between virtual machines together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. New vulnerabilities are discovered every day even in well-engineered software products, and hacking techniques grow more sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise-wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks, and a practical solution to the security problem lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure that can be used effectively to mitigate attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). Using a Markov model, we continuously monitor and quantify the risk of compromise of the different security parameters (e.g., a change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework facilitates the tenants in calculating the Mean Time To Security Failure (MTTSF) of the cloud and allows them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager, and it could improve customer trust in enterprise cloud solutions.
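The abstract does not specify the Markov model's states or transition probabilities; as a hedged sketch only, the snippet below shows how an MTTSF-style quantity can be computed for an assumed absorbing discrete-time Markov chain via the fundamental matrix. All state names and probabilities are illustrative.

    # Hypothetical MTTSF computation from an absorbing Markov chain.
    # State 0 = healthy, 1 = vulnerability disclosed, 2 = security failure (absorbing).
    import numpy as np

    P = np.array([[0.90, 0.10, 0.00],    # assumed per-time-step transition probabilities
                  [0.20, 0.70, 0.10],
                  [0.00, 0.00, 1.00]])

    Q = P[:2, :2]                         # transitions among transient states
    N = np.linalg.inv(np.eye(2) - Q)      # fundamental matrix (I - Q)^(-1)
    mttsf = N.sum(axis=1)                 # expected steps to absorption per start state
    print(f"MTTSF starting from the healthy state: {mttsf[0]:.1f} time steps")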
Future networks may change the way network administrators monitor and account for their users. History shows that a completely new (clean-slate) design is usually used to propose a new network architecture, e.g., Network Control Protocol to TCP/IP, IPv4 to IPv6, or IP to the Recursive InterNetwork Architecture. The incompatibility between these architectures changes the user accounting process, as network administrators have to use different information to identify a user. This paper presents a methodology for gathering all the information needed for a smooth transition between two incompatible architectures. The transition from IPv4 to IPv6 is used as a use case, but the same process should apply to any new networking architecture.
A key challenge of future mobile communication research is to strike an attractive compromise between a wireless network's area spectral efficiency and its energy efficiency. This necessitates a clean-slate approach to wireless system design, embracing the rich body of existing knowledge, especially on multiple-input multiple-output (MIMO) technologies. This motivates the proposal of an emerging wireless communications concept conceived for single-radio-frequency (RF) large-scale MIMO communications, termed spatial modulation (SM). The concept of SM has established itself as a beneficial transmission paradigm, subsuming numerous members of the MIMO system family. Research on SM has reached sufficient maturity to motivate its comparison to state-of-the-art MIMO communications, as well as to inspire its application to other emerging wireless systems such as relay-aided, cooperative, small-cell, optical wireless, and power-efficient communications. Furthermore, it has received sufficient research attention to be implemented in testbeds, and it holds the promise of stimulating further vigorous interdisciplinary research in the years to come. This tutorial paper is intended to offer a comprehensive state-of-the-art survey on SM-MIMO research, to provide a critical appraisal of its potential advantages, and to promote the discussion of its beneficial application areas and their research challenges, leading to an analysis of the technological issues associated with the implementation of SM-MIMO. The paper concludes with a description of the world's first experimental activities in this vibrant research field.
Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verification of the correctness of the type system. Despite their benefits, these protections often end at the process boundary; that is, type safety holds within a program but usually does not extend to the file system or to communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to process it further. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
The emergence of new technologies, along with the popularization of mobile devices and wireless communication systems, imposes a variety of requirements that the current Internet cannot adequately meet. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a Future Internet (FI) clean-slate approach, was designed to cope efficiently with the increasing demand for beyond-IP networking services. Nevertheless, despite all of ETArch's capabilities, it was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions, deploying mobility prediction, Point of Attachment (PoA) decision, and handover setup that meet both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that the additions outperform the most relevant alternative solutions in terms of performance and quality of service.
With the advent of social networks and cloud computing, the amount of multimedia data produced and communicated within social networks is rapidly increasing. Meanwhile, social networking platforms based on cloud computing have made sharing multimedia big data in social networks easier and more efficient. The growth of social multimedia, as demonstrated by social networking sites such as Facebook and YouTube, combined with advances in multimedia content analysis, underscores potential risks of malicious use such as illegal copying, piracy, plagiarism, and misappropriation. Therefore, secure multimedia sharing and traitor tracing have become critical and urgent issues in social networks. In this paper, we propose a scheme for implementing the Tree-Structured Haar (TSH) transform in a homomorphically encrypted domain for fingerprinting using social network analysis, with the purpose of protecting media distribution in social networks. The motivation is to map the hierarchical community structure of a social network onto the tree structure of the TSH transform for JPEG2000 coding, encryption, and fingerprinting. First, the fingerprint code is produced using social network analysis. Second, the encrypted content is decomposed by the TSH transform. Third, the content is fingerprinted in the TSH transform domain. Finally, the encrypted and fingerprinted contents are delivered to users via hybrid multicast-unicast. The use of fingerprinting along with encryption can provide a double layer of protection for media sharing in social networks. Theoretical analysis and experimental results show the effectiveness of the proposed scheme.
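As a simplified, plaintext-domain sketch only: one orthonormal Haar analysis level followed by additive embedding of a fingerprint code in the transform domain. The paper's scheme additionally performs these steps under homomorphic encryption and derives the tree structure from the social graph, both of which are omitted here; the embedding strength and code values are illustrative.

    # One Haar analysis level plus additive fingerprint embedding; illustrative only.
    import numpy as np

    def haar_level(signal):
        """One level of the orthonormal Haar transform for an even-length signal."""
        x = np.asarray(signal, dtype=float).reshape(-1, 2)
        approx = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)    # low-pass (averages)
        detail = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)    # high-pass (differences)
        return approx, detail

    def embed_fingerprint(detail, code, strength=0.05):
        """Spread-spectrum style embedding of a +/-1 fingerprint code."""
        code = np.resize(np.asarray(code, dtype=float), detail.shape)
        return detail + strength * code

    approx, detail = haar_level(np.arange(8))
    marked = embed_fingerprint(detail, [1, -1, 1, -1])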
Big data's explosive growth has prompted the US government to release new reports that address the resulting issues, particularly those related to privacy. The Web extra at http://youtu.be/j49eoe5g8-c is an audio recording from the Computing and the Law column, in which authors Brian M. Gaff, Heather Egan Sussman, and Jennifer Geetter discuss these reports.
For the first time in the history of humanity, more than half of the population now lives in big cities. This scenario has raised concerns about the systems that provide basic services to citizens. Moreover, those systems now have the responsibility to empower citizens with information that can aid their daily decisions regarding education, transport, health, and other matters. This environment creates a set of interconnected services that can support a whole new range of solutions, often referred to as a System of Systems. In the context of a smart city, new information security challenges arise; these concerns go beyond privacy issues, covering situations where the entire environment could be affected by problems other than a breach of data confidentiality alone. This paper discusses and proposes nine security issues that can be part of a smart city environment and that go beyond violations of citizens' privacy.
Multiple Inductive Loop Detectors are advanced inductive loop sensors that can measure traffic flow parameters even in conditions where the traffic is heterogeneous and does not conform to lanes. The sensor consists of many inductive loops in series, with each loop having a parallel capacitor across it. These inductive and capacitive elements may undergo open-circuit or short-circuit faults during operation, and such faults lead to erroneous interpretation of the data acquired from the loops. Conventional methods for fault diagnosis in inductive loop detectors consume time and effort, as they require experienced technicians and involve extracting the loops from the saw-cut slots on the road. This also means that traffic flow parameters cannot be measured until the sensor system becomes functional again, and the repair activities disturb traffic flow. This paper presents a method for automating fault diagnosis in series-connected Multiple Inductive Loop Detectors based on an impulse test. The system helps diagnose open/short faults associated with the inductive and capacitive elements of the sensor structure by displaying the fault status conveniently. Since both the fault location and the fault type can be precisely identified using this method, the repair actions are localised, and the proposed system thereby yields significant savings in both repair time and repair cost. An embedded system was developed to realize this scheme and was tested on a loop prototype.
This paper proposes an enhanced method for personal authentication based on the finger-knuckle print (FKP) using Kekre's wavelet transform (KWT). The finger-knuckle print is the inherent skin pattern of the outer surface around the phalangeal joint of one's finger. It is highly discriminative and unique, which makes it a promising emerging biometric identifier. Kekre's wavelet transform is constructed from Kekre's transform. The proposed system is evaluated on a prepared FKP database that covers all categories of FKP and contains 500 samples in total. This paper focuses on different image enhancement techniques for pre-processing the captured images. The proposed algorithm is examined on 350 training and 150 testing samples, showing that the quality of the database and the pre-processing techniques play an important role in recognizing individuals. The experiments compute performance parameters such as the false acceptance rate (FAR), false rejection rate (FRR), true acceptance rate (TAR), and true rejection rate (TRR). The test results demonstrate an improvement in the equal error rate (EER), which is very important for authentication. The experimental results using Kekre's algorithm along with image enhancement show that the finger knuckle recognition rate is better than that of the conventional method.
A wireless network, whether ad hoc or enterprise level, is vulnerable due to its open medium and to typically weak authentication, authorization, encryption, monitoring, and accounting mechanisms. Various wireless vulnerability situations are examined, as well as the minimal features required to protect, monitor, account, authenticate, and authorize nodes, users, and computers on the network. Aspects of several IEEE security standards, both ratified and still in draft, are also described.
This paper presents FlowNAC, a flow-based Network Access Control solution that grants users the right to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be requested independently, and multiple services can be authorized simultaneously. Building this proposal on SDN principles has several benefits: SDN adds the appropriate granularity (fine- or coarse-grained) depending on the target scenario, and the flexibility to dynamically identify services at the data plane as sets of flows and to enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (a novel EAPoL-in-EAPoL encapsulation) to authenticate users (without the need for a captive portal) and service-level access control based on proactive deployment of flows (instead of reactive). An explicit service request avoids misidentifying the target service, as could happen when analyzing the traffic (e.g., private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results.
This paper designs a secure transmission and authorization management system based on the principles of Public Key Infrastructure (PKI) and Role-Based Access Control (RBAC). It can solve the problems of identity authentication, secure transmission, and access control on the Internet. First, a certificate authority system is implemented according to PKI principles; it can issue and revoke server-side and client-side digital certificates. Secure data transmission is achieved through the combination of digital certificates and the SSL protocol. In addition, this paper analyses the access control mechanism and the RBAC model. The structure of the RBAC model has been improved: the principle of group authority is added to the model, and a combination of centralized and distributed authority management is adopted, so the model becomes more flexible.
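As a minimal sketch of the group-authority idea only (the mappings and names below are illustrative, not from the paper): users belong to groups, groups are granted roles, and roles carry permissions, so an access decision walks the user-group-role-permission chain.

    # Toy RBAC check with group-level authority; all names are illustrative.
    user_groups = {"alice": {"finance"}, "bob": {"engineering"}}
    group_roles = {"finance": {"auditor"}, "engineering": {"developer"}}
    role_perms  = {"auditor": {"read_reports"}, "developer": {"deploy", "read_logs"}}

    def permitted(user, permission):
        """True if any role reachable through the user's groups grants `permission`."""
        roles = set().union(*(group_roles.get(g, set()) for g in user_groups.get(user, set())))
        return any(permission in role_perms.get(r, set()) for r in roles)

    assert permitted("alice", "read_reports")
    assert not permitted("bob", "read_reports")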
Smart objects are small devices with limited system resources, typically made to fulfill a single simple task. By connecting smart objects and thus forming an Internet of Things, the devices can interact with each other and their users and support a new range of applications. Due to the limitations of smart objects, common security mechanisms are not easily applicable. Small message sizes and the lack of processing power severely limit the devices' ability to perform cryptographic operations. This paper introduces a protocol for delegating client authentication and authorization in a constrained environment. The protocol describes how to establish a secure channel based on symmetric cryptography between resource-constrained nodes in a cross-domain setting. A resource-constrained node can use this protocol to delegate authentication of communication peers and management of authorization information to a trusted host with less severe limitations regarding processing power and memory.
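As a rough sketch of the general delegation pattern only (not the protocol specified in the paper, whose message formats and key management differ): a trusted host that shares a long-term symmetric key with the constrained node derives a short-lived session key for an authenticated client and hands the client a MAC-protected ticket that the node can verify offline. Field layouts, lifetimes, and the byte-string client identifier are assumptions.

    # Kerberos-like symmetric delegation sketch using only the standard library.
    import hmac, hashlib, os, time

    def derive_session_key(longterm_key, client_id, nonce):
        # client_id and nonce are bytes
        return hmac.new(longterm_key, client_id + nonce, hashlib.sha256).digest()

    def issue_ticket(longterm_key, client_id, lifetime=300):
        """Run on the trusted authorization host."""
        nonce = os.urandom(16)
        expiry = int(time.time()) + lifetime
        body = client_id + nonce + expiry.to_bytes(8, "big")
        mac = hmac.new(longterm_key, body, hashlib.sha256).digest()
        session_key = derive_session_key(longterm_key, client_id, nonce)
        return body + mac, session_key          # ticket for the node, key for the client

    def node_accepts(longterm_key, ticket):
        """Run on the constrained node; no asymmetric crypto needed."""
        body, mac = ticket[:-32], ticket[-32:]
        if not hmac.compare_digest(mac, hmac.new(longterm_key, body, hashlib.sha256).digest()):
            return None
        client_id, nonce = body[:-24], body[-24:-8]
        expiry = int.from_bytes(body[-8:], "big")
        if time.time() > expiry:
            return None
        return derive_session_key(longterm_key, client_id, nonce)   # node recovers the same key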
Cloud computing is an emerging paradigm and technology for hosting and delivering resources over a network such as the Internet, using concepts of virtualization, processing power, and storage. However, many challenging issues are still unresolved in cloud-based environments and decrease the reliability and efficiency achievable by service providers and users. User authentication is one of the most challenging of these issues, and to address it this paper proposes an efficient user authentication model comprising two defined phases, registration and access. Geo Detection and Digital Signature Authorization (GD2SA) is a user authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user as reported by devices already belonging to the user (e.g., a smartphone). In addition, the authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. The model is evaluated in this paper according to three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model shows that it can increase the efficiency and reliability of cloud computing as an emerging technology.
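As an illustrative sketch of the location-comparison step only (the registration phase and digital-signature verification described above are omitted, and the threshold and coordinates are made-up values): the unregistered device is granted provisional access only if it is near the location reported by a device already registered to the account owner.

    # Great-circle distance check between the new device and a trusted device.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def location_plausible(new_device_pos, trusted_device_pos, threshold_km=50.0):
        """Allow provisional access only if the unregistered device is near the
        location reported by a device already registered to the account owner."""
        return haversine_km(*new_device_pos, *trusted_device_pos) <= threshold_km

    print(location_plausible((48.85, 2.35), (48.80, 2.30)))   # nearby -> True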
Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
The success of the IoT world requires service provision characterized by ubiquity, reliability, high performance, efficiency, and scalability. To achieve this, the future business and research vision is to merge the Cloud Computing and IoT concepts, i.e., to enable an “Everything as a Service” model: specifically, a Cloud ecosystem encompassing novel functionality and cognitive-IoT capabilities will be provided. Hence, this paper describes an innovative IoT-centric Cloud smart infrastructure that addresses individual IoT and Cloud Computing challenges.
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., the study area) to reduce the computational cost associated with transient stability studies. This paper presents a method for deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators, which match the extracted dynamic features with the highest similarity, are then identified in the feature attribution step, forming a suboptimal "basis" of the system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields a better reduction ratio and smaller response errors than traditional coherency-based reduction methods.
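As a compact numerical sketch of the three DEAR steps on synthetic response data (the data, the number of retained features, and the attribution rule below are assumptions; the paper's attribution step is richer than this nearest-feature proxy):

    # Sketch of feature extraction, attribution, and reconstruction on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic measured responses: rows are time samples, columns are generators.
    X = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 10))

    # 1) Feature extraction: dominant dynamic features via SVD.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 3
    features = U[:, :k] * s[:k]                    # leading temporal features

    # 2) Attribution: pick the generators whose responses load most strongly on each feature.
    scores = np.abs(Vt[:k, :])
    characteristic = np.unique(scores.argmax(axis=1))

    # 3) Reconstruction: express every generator as a linear combination of the
    #    characteristic ones (least squares), giving the reduced model's output map.
    B = X[:, characteristic]
    coeffs, *_ = np.linalg.lstsq(B, X, rcond=None)
    error = np.linalg.norm(X - B @ coeffs) / np.linalg.norm(X)
    print(f"characteristic generators: {characteristic}, relative error: {error:.3f}")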