Bibliography
One of the major threats against web applications is Cross-Site Scripting (XSS). The ultimate target of XSS attacks is the client running a particular web browser. Over the last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is accompanied by systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of versions of the most popular web browsers. We use XSS attack vectors as unit test cases and propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis for ensuring that security is not sacrificed when a new version is delivered.
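To make the "XSS vectors as unit test cases" idea concrete, here is a minimal sketch of such a regression harness. The vector list is a tiny illustrative sample (the shared benchmark the paper proposes would be far larger and exercised against real browsers), and `sanitize` is a hypothetical filter standing in for browser-side defenses:

```python
import re
import unittest

# Illustrative XSS attack vectors; a real shared benchmark would be much larger.
XSS_VECTORS = [
    '<script>alert(1)</script>',
    '<img src=x onerror=alert(1)>',
    '<svg onload=alert(1)>',
    '"><script>alert(document.cookie)</script>',
]

def sanitize(markup: str) -> str:
    """Hypothetical filter under test; a stand-in for browser-side defenses."""
    markup = re.sub(r'<\s*script[^>]*>.*?<\s*/\s*script\s*>', '',
                    markup, flags=re.IGNORECASE | re.DOTALL)
    return re.sub(r'\son\w+\s*=', ' ', markup, flags=re.IGNORECASE)

class XSSRegressionTest(unittest.TestCase):
    """One unit test case per attack vector, rerun on every new release."""
    def test_vectors_are_neutralized(self):
        for vector in XSS_VECTORS:
            with self.subTest(vector=vector):
                out = sanitize(vector).lower()
                self.assertNotIn('<script', out)
                self.assertNotIn('onerror=', out)
                self.assertNotIn('onload=', out)

if __name__ == '__main__':
    unittest.main()
```

Running the same suite against each release makes a security regression show up as an ordinary failing test.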
Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert the code in the output application of web page which is referred to the web browser of visitor and then the inserted code executes automatically and steals the sensitive information. In order to prevent the users from XSS attack, many client- side solutions have been implemented; most of them being used are the filters that sanitize the malicious input. However, many of these filters do not provide prevention to the newly designed sophisticated attacks such as multiple points of injection, injection into script etc. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting vulnerable web applications which can be exploited using above mentioned sophisticated attacks. Results prove that the proposed approach provides accurate higher detection rate of exploits. In addition to this, an implementation of blocking the execution of malicious scripts have contributed to XSS-Me: an open source Mozilla Firefox security extension that detects for reflected XSS vulnerabilities which can be considered as an effective solution if it is integrated inside the browser rather than being enforced as an extension.
The inappropriate use of features intended to improve the usability and interactivity of web applications has resulted in the emergence of various threats, including Cross-Site Scripting (XSS) attacks. In this work, we developed ETSS Detector, a generic and modular web vulnerability scanner that automatically analyzes web applications to find XSS vulnerabilities. ETSS Detector is able to identify and analyze all data entry points of the application and generate specific code injection tests for each one. The results show that correctly filling the input fields with only valid information ensures better test effectiveness, increasing the detection rate of XSS attacks.
Cell discontinuous transmission (DTX) is a new feature that enables sleep-mode operation at the base station (BS) during transmission time intervals that carry no traffic. In this letter, we analyze the maximum achievable energy saving of cell DTX. We combine cell DTX with a clean-slate network deployment and obtain the optimal BS density for the lowest energy consumption that satisfies a given quality-of-service requirement, considering daily traffic variation. The numerical results indicate that the fast traffic adaptation capability of cell DTX favors dense network deployments with lightly loaded cells, which brings considerable improvement in energy saving.
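The intuition that DTX favors dense, lightly loaded deployments can be shown with a toy energy model. This is an illustrative sketch, not the letter's actual formulation; the power figures are placeholder values:

```python
# Toy illustration of why cell DTX favors dense, lightly loaded deployments.
# Power figures are placeholders, not the paper's model parameters.
P_ACTIVE = 130.0   # W, BS transmitting
P_SLEEP  = 75.0    # W, BS in DTX sleep mode
P_IDLE   = 110.0   # W, BS on but idle (no DTX)

def daily_energy_kwh(n_bs: int, hourly_load: list[float], dtx: bool) -> float:
    """Energy of n_bs base stations over one day.

    hourly_load[h] is the fraction of subframes carrying traffic in hour h;
    with DTX the BS sleeps in the remaining fraction, otherwise it idles.
    """
    total_w_h = 0.0
    for rho in hourly_load:
        per_bs = rho * P_ACTIVE + (1 - rho) * (P_SLEEP if dtx else P_IDLE)
        total_w_h += n_bs * per_bs
    return total_w_h / 1000.0

# Densifying from 10 to 20 BSs halves the per-cell load in this toy model.
load_10 = [0.6 if 8 <= h < 22 else 0.1 for h in range(24)]
load_20 = [rho / 2 for rho in load_10]
print(daily_energy_kwh(10, load_10, dtx=True))
print(daily_energy_kwh(20, load_20, dtx=True))
```

Because sleeping is cheap relative to idling, spreading the same traffic over more lightly loaded cells lets each BS sleep longer, which is the qualitative effect the letter quantifies.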
With the growing demand for increased spectral efficiency, there has been renewed interest in enabling full-duplex communications. However, existing approaches to enabling full-duplex require a clean-slate design to address its key challenge, namely self-interference suppression. This is a major deterrent to enabling full-duplex in existing cellular networks. Towards our vision of enabling full-duplex in legacy cellular networks, specifically LTE, with no modifications to existing hardware at the BS and client or to technology-specific industry standards, we present the design of our experimental system FD-LTE, which incorporates a combination of passive self-interference cancellation schemes with legacy LTE half-duplex BS and client devices. We build a prototype of FD-LTE, integrate it with LTE's evolved packet core, and conduct over-the-air experiments to explore the feasibility and potential of full-duplex in legacy LTE networks. We report promising experimental results from FD-LTE, which currently applies to scenarios with the limited ranges typical of small cells.
Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verification of the correctness of the type system. Despite their benefits, these protections often end at the process boundary: type safety holds within a program but usually does not extend to the file system or to communication with other programs. Existing operating-system approaches to bridging this gap require the use of a single programming language or a common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system that requires all program input and output to satisfy a recognizer before applications are permitted to process it further. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
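One way a runtime-agnostic, automatically generated type identifier could work is to hash a canonical encoding of the type's definition, so that every language binding derives the same identifier. This is a hedged sketch of the idea, not Ethos's actual encoding:

```python
import hashlib
import json

def type_identifier(type_spec: dict) -> str:
    """Derive a deterministic identifier from a canonical type description.

    Canonical JSON (sorted keys, fixed separators) ensures the same spec
    always hashes to the same identifier, independently of the language
    that produced it.
    """
    canonical = json.dumps(type_spec, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()

# Hypothetical message type shared by two programs in different languages.
ping_type = {'name': 'Ping',
             'fields': [{'name': 'seq', 'type': 'uint32'},
                        {'name': 'payload', 'type': 'bytes'}]}
print(type_identifier(ping_type))
```

A system-wide recognizer keyed by such an identifier can then reject any input that does not parse as the declared type before the application ever sees it.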
The emergence of new technologies, along with the popularization of mobile devices and wireless communication systems, imposes a variety of requirements that the current Internet cannot adequately meet. In this scenario, the innovative information-centric Entity Title Architecture (ETArch), a Future Internet (FI) clean-slate approach, was designed to efficiently cope with the increasing demand for beyond-IP networking services. Nevertheless, despite all its capabilities, ETArch was not designed with reliable networking functions, which limits its operability in mobile multimedia networking and will seriously restrict its scope in Future Internet scenarios. Therefore, our work extends ETArch mobility control with advanced quality-oriented mobility functions to deploy mobility prediction, Point of Attachment (PoA) decision, and handover setup, meeting both the session quality requirements of active session flows and the current wireless quality conditions of neighbouring PoA candidates. The effectiveness of the proposed additions was confirmed through a preliminary evaluation carried out in MATLAB, in which we considered distinct application scenarios and showed that they outperform the most relevant alternative solutions in terms of performance and quality of service.
Due to the high volume and velocity of big data, it is an effective option to store big data in the cloud, as the cloud has the capabilities to store big data and to process a high volume of user access requests. Attribute-Based Encryption (ABE) is a promising technique for ensuring the end-to-end security of big data in the cloud. However, policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data, re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs high communication overhead and a heavy computation burden on data owners. In this paper, we propose a novel scheme that enables efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method avoids the transmission of encrypted data and minimizes the computation work of data owners by making use of data previously encrypted under the old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analysis shows that our scheme is correct, complete, secure, and efficient.
The Internet of Things (IoT) is extending the Internet into our physical world and making it present everywhere. This evolution also raises challenges in issues such as privacy and security. For that reason, this work focuses on the integration and lightweight adaptation of existing authentication protocols that can also offer authorization and access control functionalities. In particular, this work focuses on the Extensible Authentication Protocol (EAP). EAP is a widely used protocol for access control in local area networks, both wireless (802.11) and wired (802.3). This work presents an integration of EAP frames into IEEE 802.15.4 frames, demonstrating that the EAP protocol and some of its mechanisms are feasible to apply in constrained devices, such as those populating IoT networks.
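The encapsulation challenge is largely one of size: an EAP packet (RFC 3748: Code, Identifier, Length, Type, Data) can exceed the roughly 100 bytes of payload available in an 802.15.4 data frame, so it must be fragmented. The sketch below uses the RFC 3748 EAP header, but the 2-byte chunk header is an illustrative convention of ours, not a standardized encapsulation:

```python
import struct

MAX_154_PAYLOAD = 102  # bytes commonly available in an 802.15.4 data frame

def build_eap_request(identifier: int, eap_type: int, data: bytes) -> bytes:
    """EAP packet per RFC 3748: Code(1) Identifier(1) Length(2) Type(1) Data."""
    length = 5 + len(data)
    return struct.pack('!BBHB', 1, identifier, length, eap_type) + data

def fragment_for_802154(eap_packet: bytes) -> list[bytes]:
    """Split an EAP packet into 802.15.4-sized chunks.

    The 2-byte chunk header (fragment index, more-fragments flag) is an
    illustrative convention, not a standardized encapsulation.
    """
    chunk_size = MAX_154_PAYLOAD - 2
    chunks = [eap_packet[i:i + chunk_size]
              for i in range(0, len(eap_packet), chunk_size)]
    return [struct.pack('!BB', idx, idx < len(chunks) - 1) + chunk
            for idx, chunk in enumerate(chunks)]

packet = build_eap_request(identifier=1, eap_type=1, data=b'user@realm')
print([len(f) for f in fragment_for_802154(packet)])
```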
Cloud computing delivers services to users over a reliable Internet connection. In the cloud, services are stored and shared by multiple users because of lower cost and easier data maintenance. Sharing data is the central purpose of cloud data centres; on the other hand, storing sensitive information raises privacy concerns. The cloud service provider has to protect clients' stored documents and applications by encrypting the data to provide data integrity. Designing proficient document sharing among group members in the cloud is difficult because of group membership changes and the need to preserve document and group-user identity confidentiality. We propose a fortified scheme for sharing data secretly that uses the Advanced Encryption Standard (AES) to provide efficient group revocation. The proposed system contributes efficient group authorization, authentication, confidentiality, access control, and document security. To provide stronger data security, the AES algorithm is used to encrypt documents. By asserting security and confidentiality, this proficient method securely shares documents among multiple cloud users.
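The AES document-encryption step can be illustrated with an authenticated mode such as AES-GCM. This is a minimal sketch using the `cryptography` library; group key distribution and revocation, which are the scheme's harder parts, are out of scope here, and the names are illustrative:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_document(key: bytes, document: bytes, group_id: str) -> bytes:
    """Encrypt a document with AES-256-GCM, binding it to a group identity.

    The group id is authenticated (not encrypted) as associated data, so a
    ciphertext moved to another group fails verification on decryption.
    """
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, document, group_id.encode())

def decrypt_document(key: bytes, blob: bytes, group_id: str) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, group_id.encode())

group_key = AESGCM.generate_key(bit_length=256)  # distributed to group members
blob = encrypt_document(group_key, b'quarterly report', 'group-42')
assert decrypt_document(group_key, blob, 'group-42') == b'quarterly report'
```

On revocation, the group key is rotated and documents are re-encrypted, so a removed member's old key no longer decrypts new material.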
Cloud computing is an application and a set of services delivered over the Internet. Although it is an emerging technology for shared infrastructure, it lacks adequate access rights and security mechanisms. Because of these security gaps for cloud users, our system focuses on security provided through a token management system. Cloud computing is based on the Internet, where computation is done on virtual shared servers that provide infrastructure, software, platform, and security as services, and security plays an important role in cloud services. Hence, security is provided through three types of services: mutual authentication, directory services, and token granting for resources. The existing token-issuing mechanism does not scale to large data sets and also increases memory overhead between the client and the server. Our proposed work therefore focuses on issuing tokens to users in a way that addresses the problems of scalability and memory overhead. The proposed token management framework monitors all operations of the cloud and thereby manages the entire cloud infrastructure. Our model falls under a new category of cloud model known as "Security as a Service". This paper provides the security framework as an architectural model that verifies user authorization and the correctness of stored resources, thereby giving data owners a guarantee for the resources they store in the cloud. The framework also describes secure token storage and facilitates the search and use of tokens for auditing and user supervision.
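One common way to address token-related memory overhead is to make tokens self-contained and signed, so the server stores no per-token state. This sketch is an assumption about a plausible design, not the paper's actual mechanism; all names are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_KEY = b'server-side secret key'  # placeholder secret

def issue_token(user: str, resource: str, ttl_s: int = 3600) -> str:
    """Issue a self-contained signed token.

    The server keeps no per-token state, which is one way to sidestep the
    memory-overhead problem between client and server.
    """
    claims = {'user': user, 'resource': resource,
              'exp': int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return f'{body.decode()}.{sig}'

def verify_token(token: str) -> dict | None:
    body, sig = token.rsplit('.', 1)
    expected = hmac.new(SERVER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims['exp'] > time.time() else None  # None if expired

tok = issue_token('alice', 'bucket/report.pdf')
print(verify_token(tok))
```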
Monitoring is an important issue in cloud environments because it ensures that acquired cloud slices meet users' expectations. However, these environments are multi-tenant and dynamic, requiring automation techniques to offload cloud administrators. In a previous work, we proposed FlexACMS, a framework to automate monitoring configuration related to cloud slices using multiple monitoring solutions. In this work, we enhanced FlexACMS to allow dynamic and automatic attribution of monitoring configuration tasks to servers without administrator intervention, which was not available in the previous version. FlexACMS also considers monitoring server load when attributing configuration tasks, which allows load balancing between monitoring servers. The evaluation showed that the enhancements reduced FlexACMS response time by up to 60% compared to the previous version. The scalability evaluation of the enhanced version demonstrated the feasibility of our approach in large-scale cloud environments.
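Load-aware attribution of configuration tasks can be sketched with a least-loaded-server heuristic. The exact load metric FlexACMS uses is not specified here, so pending-task count stands in for it; this is an illustration of the balancing idea, not the framework's implementation:

```python
import heapq

class MonitoringPool:
    """Attribute configuration tasks to the least-loaded monitoring server.

    A min-heap keyed on current load gives O(log n) attribution; pending
    task count is used as a stand-in load metric.
    """
    def __init__(self, servers: list[str]):
        self._heap = [(0, name) for name in servers]
        heapq.heapify(self._heap)

    def attribute(self, task: str) -> str:
        load, name = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, name))
        print(f'{task} -> {name} (load now {load + 1})')
        return name

pool = MonitoringPool(['mon-1', 'mon-2', 'mon-3'])
for slice_id in ['slice-a', 'slice-b', 'slice-c', 'slice-d']:
    pool.attribute(f'configure monitoring for {slice_id}')
```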
In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities, with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. A typical WSN topology that applies to most applications allows sensors to act as data sources that forward their measurements to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN. An adversary may employ traffic analysis techniques such as evidence theory to identify the BS based on network traffic flow, even when the WSN implements conventional security mechanisms. This motivates a need for WSN operators to achieve improved BS anonymity to protect the identity, role, and location of the BS. Many traffic analysis countermeasures have been proposed in the literature, but they are typically evaluated on data traffic only, without considering the effects of network synchronization on anonymity performance. In this paper we use evidence theory analysis to examine the effects of WSN synchronization on BS anonymity by studying two commonly used protocols: Reference Broadcast Synchronization (RBS) and the Timing-sync Protocol for Sensor Networks (TPSN).
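The core of the evidence-theory analysis is Dempster's rule of combination, which fuses independent pieces of evidence about, say, which node is the BS. Below is a minimal implementation of the standard rule; the hypotheses and mass values are illustrative, not from the paper:

```python
from itertools import product

def dempster_combine(m1: dict[frozenset, float],
                     m2: dict[frozenset, float]) -> dict[frozenset, float]:
    """Dempster's rule of combination over basic mass assignments.

    Masses of pairs of focal elements are multiplied; non-empty
    intersections are kept, conflicting (empty) combinations are discarded,
    and the result is renormalized by the non-conflicting mass (1 - K).
    """
    combined: dict[frozenset, float] = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two illustrative pieces of traffic evidence about which node is the BS.
e1 = {frozenset({'n3'}): 0.6, frozenset({'n3', 'n7'}): 0.4}
e2 = {frozenset({'n3'}): 0.5, frozenset({'n7'}): 0.2,
      frozenset({'n3', 'n7'}): 0.3}
print(dempster_combine(e1, e2))
```

Synchronization traffic matters because its regular message patterns add exactly this kind of evidence, sharpening the adversary's belief about the BS.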
Wireless Sensor Networks (WSNs) are deployed to monitor assets (e.g., endangered species) and report the locations of these assets to the Base Station (BS), also known as the sink. A hunter (adversary) attacks the network one or two hops away from the sink, eavesdrops on the wireless communication links, and traces back to the location of the assets to capture them. The existing solutions proposed to preserve the privacy of the assets lack energy efficiency, as they rely on random-walk routing and fake-packet injection to prevent the hunter from locating the assets. In this paper we present an energy-efficient privacy-preserving routing algorithm in which the nodes that detect an event (i.e., an asset), called source nodes, report the event's location information to the Base Station using the phantom source (also known as phantom node) concept and the α-angle anonymity concept. Routing is done using an existing greedy routing protocol. Comparison through simulations shows that our solution reduces energy consumption and delay while maintaining the same level of privacy as two existing popular techniques.
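A 2-D sketch of phantom-source selection is given below. Interpreting the α-angle constraint as bounding the walk direction away from the source-to-sink bearing is our assumption for illustration; the paper's exact geometric construction may differ:

```python
import math
import random

def choose_phantom(source, sink, walk_len: float, alpha_deg: float):
    """Pick a phantom location by walking away from the source in a
    direction at least alpha degrees off the source-to-sink bearing.

    The point of the phantom is that backtracing the packet stream leads
    the hunter to the phantom's neighborhood, not the real source.
    (The alpha-angle interpretation here is an assumption.)
    """
    bearing = math.atan2(sink[1] - source[1], sink[0] - source[0])
    alpha = math.radians(alpha_deg)
    # Sample a direction outside the +/- alpha cone toward the sink.
    theta = bearing + random.uniform(alpha, 2 * math.pi - alpha)
    return (source[0] + walk_len * math.cos(theta),
            source[1] + walk_len * math.sin(theta))

phantom = choose_phantom(source=(10.0, 10.0), sink=(0.0, 0.0),
                         walk_len=15.0, alpha_deg=60.0)
print(phantom)  # packets are then greedily routed phantom -> sink
```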
Smartphones are a new type of mobile device on which users can easily install additional software. Almost all smartphone applications use the client-server model because end-to-end communication is prevented by NAT routers. Recently, some smartphone applications have provided real-time services such as voice and video communication and online games. In these applications, end-to-end communication is desirable to reduce transmission delay and achieve efficient network usage; IP mobility and security are also important concerns. However, conventional IP mobility mechanisms are not suitable for these applications because most of them are assumed to be installed in the OS kernel. We have developed a novel IP mobility mechanism called NTMobile (Network Traversal with Mobility). NTMobile supports end-to-end IP mobility in both IPv4 and IPv6 networks; however, like other such technologies, it is assumed to be installed in the Linux kernel. In this paper, we propose a new type of end-to-end mobility platform that provides end-to-end communication, mobility, and secure data exchange functions in the application layer for smartphone applications. In the platform, we use NTMobile, ported as an application program, and extend it to suit smartphone devices and to provide secure data exchange. Client applications can achieve secure end-to-end communication and secure data exchange by sharing an encryption key between clients. Users also enjoy IP mobility, the main function of NTMobile, in each application. Finally, we confirmed that the developed module works on both Android and iOS systems.
Effective Personalized Mobile Search Using KNN implements an architecture to improve the effectiveness of personalization for users over large data sets while maintaining data security. User preferences are gathered through clickthrough data, which is sent to the server in encrypted form and classified into content concepts and location concepts. To improve classification and minimize processing time, the K-Nearest-Neighbor (KNN) algorithm is used. The identified preferences (location and content) are merged to provide effective preferences to the user. The system makes use of four entropies to balance the weight between content concepts and location concepts. The system implements a client-server architecture: the client collects user queries and maintains them in files for future reference, with user preference privacy ensured through privacy parameters and encryption techniques, while the server carries out tasks such as training, re-ranking of the obtained search results, and concept extraction. Experiments were carried out on an Android-based mobile device. The results show that the system gives significantly improved results over the previous algorithm for large data sets while maintaining security.
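The KNN classification step can be illustrated with scikit-learn. The feature vectors and labels below are toy stand-ins for whatever concept features the system extracts from clickthrough data:

```python
from sklearn.neighbors import KNeighborsClassifier  # pip install scikit-learn

# Toy clickthrough features (e.g., weights for two concept dimensions);
# labels split preferences into content vs. location concepts.
X_train = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9],
           [0.1, 0.8], [0.7, 0.2], [0.3, 0.7]]
y_train = ['content', 'content', 'location',
           'location', 'content', 'location']

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Classify a new preference extracted from a click; values are illustrative.
print(knn.predict([[0.25, 0.85]]))        # -> ['location']
print(knn.predict_proba([[0.25, 0.85]]))  # per-class neighborhood vote share
```

KNN needs no training phase beyond storing the examples, which is one reason it suits the stated goal of minimizing processing time.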
The electric network frequency (ENF) signal can be captured in multimedia recordings due to electromagnetic influences from the power grid at the time of recording. Recent work has exploited ENF signals for forensic applications, such as authenticating and detecting forgery of ENF-containing multimedia signals, and inferring their time and location of creation. In this paper, we explore a new potential use of ENF signals: automatic synchronization of audio and video. The ENF signal, as a time-varying random process, can be used as a timing fingerprint of multimedia signals, and synchronization of audio and video recordings can be achieved by aligning their embedded ENF signals. We demonstrate the proposed scheme with two applications: multi-view video synchronization and synchronization of historical audio recordings. The experimental results show that the ENF-based synchronization approach is effective and has the potential to solve problems that are intractable with other existing methods.
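The alignment step can be illustrated with cross-correlation of two extracted ENF sequences. This is a minimal sketch on synthetic data; it assumes ENF estimates sampled at a fixed rate and omits the ENF extraction front end:

```python
import numpy as np

def enf_offset(enf_a: np.ndarray, enf_b: np.ndarray, fps: float) -> float:
    """Estimate the time offset between two recordings by aligning their
    extracted ENF sequences (samples of instantaneous mains frequency).

    Sequences are mean-removed and cross-correlated; the lag with maximum
    correlation, divided by the ENF sampling rate, gives the offset in
    seconds.
    """
    a = enf_a - enf_a.mean()
    b = enf_b - enf_b.mean()
    corr = np.correlate(a, b, mode='full')
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / fps

# Synthetic ENF drifting around 50 Hz, one estimate per second;
# recording b starts 30 s after recording a.
rng = np.random.default_rng(0)
enf = 50 + 0.02 * np.cumsum(rng.standard_normal(600))
a, b = enf[:500], enf[30:530]
print(enf_offset(a, b, fps=1.0))  # approximately 30.0
```

Because the grid frequency drifts randomly, a long enough ENF segment is effectively unique in time, which is what makes it usable as a timing fingerprint.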
Wireless Mesh Networks (WMNs) are a promising wireless network architecture with the potential to provide last-mile connectivity. Considerable research has been carried out on various WMN issues such as design, performance, and security. Due to the increasing interest in WMNs and the use of smart devices running bandwidth-hungry applications, WMNs must be designed with the objective of energy-efficient communication. The goal of this paper is to summarize the importance of energy efficiency in WMNs and to review various techniques for achieving energy-efficient solutions.
This paper presents the application of fusion methods to a visual surveillance scenario. The range of features relevant for re-identifying vehicles is discussed, along with methods for fusing the probabilistic estimates derived from these features. In particular, two statistical parametric fusion methods are considered: Bayesian networks and the Dempster-Shafer approach. The main contribution of this paper is the development of a metric that allows direct comparison of the benefits of the two methods. This is achieved by generalising the Kelly betting strategy to accommodate a variable total stake for each sample, subject to a fixed expected (mean) stake. This metric provides a way to quantify the extra information provided by the Dempster-Shafer method in comparison to a Bayesian fusion approach.
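One plausible reading of the betting metric, sketched below under our own assumptions rather than the paper's exact construction: each fuser is scored by a Kelly-style game in which the stake varies with the fuser's confidence per sample but is rescaled to a fixed mean stake, and mean log-wealth growth rewards informative confidence:

```python
import numpy as np

def log_wealth_growth(probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Score a probabilistic fuser by a Kelly-style betting game (a sketch).

    Each sample's stake is proportional to how far the fuser's probability
    departs from 1/2 (variable stake), rescaled so the mean stake is fixed.
    Wealth multiplies by (1 + stake) on a correct call and (1 - stake)
    otherwise; the mean log-growth rewards well-placed confidence.
    """
    stakes = np.abs(2.0 * probs - 1.0)
    stakes = stakes / stakes.mean()          # fixed expected (mean) stake
    stakes = np.clip(stakes, 0.0, 0.99)      # never bet the whole bankroll
    calls = probs > 0.5
    won = calls == outcomes.astype(bool)
    return float(np.mean(np.log1p(np.where(won, stakes, -stakes))))

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 2000)
sharp = np.clip(truth + rng.normal(0, 0.3, 2000), 0.01, 0.99)  # confident fuser
vague = np.clip(truth + rng.normal(0, 0.6, 2000), 0.01, 0.99)  # hedging fuser
print(log_wealth_growth(sharp, truth), log_wealth_growth(vague, truth))
```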
The performance of indirect trust computation models (based on recommendations) can easily be compromised by the subjective and social prejudices embedded in the provided recommendations. Eradicating the influence of such recommendations remains an important and challenging issue in indirect trust computation models. We propose an effective model for indirect trust computation that is capable of identifying dishonest recommendations. Dishonest recommendations are identified using a deviation-based detection technique. We also propose measuring the credibility of each recommendation (rather than the credibility of the recommender) using a fuzzy inference engine, in order to determine the influence of each honest recommendation. The proposed model has been compared with other existing evolutionary recommendation models in this field, and it is shown to be more accurate in measuring the trustworthiness of an unknown entity.
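Deviation-based screening can be sketched as follows: recommendations too far from a robust center are treated as dishonest, and the rest are weighted by closeness to it. The linear weight below is a simple stand-in for the paper's fuzzy credibility measure, and the threshold is illustrative:

```python
import statistics

def filter_recommendations(recs: dict[str, float], max_dev: float = 0.25):
    """Deviation-based screening of recommendations (trust values in [0, 1]).

    Recommendations deviating from the median by more than max_dev are
    treated as dishonest and dropped; the rest are weighted by closeness
    to the median, a linear stand-in for a fuzzy credibility measure.
    """
    center = statistics.median(recs.values())
    honest = {who: r for who, r in recs.items() if abs(r - center) <= max_dev}
    weights = {who: 1.0 - abs(r - center) / max_dev for who, r in honest.items()}
    total = sum(weights.values())
    trust = sum(r * weights[who] for who, r in honest.items()) / total
    return trust, sorted(set(recs) - set(honest))

recs = {'a': 0.80, 'b': 0.75, 'c': 0.85, 'd': 0.10, 'e': 0.78}  # 'd' bad-mouths
print(filter_recommendations(recs))  # aggregated trust, detected liars ['d']
```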
The security of EDA tools has long been ignored because IC designers and integrators focus only on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel trust evaluation framework for EDA tools has been proposed to ensure their trustworthiness through functional operation rather than by scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tool evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Aided by the gate-level scheme, we expand the territory of proof-carrying-based IP protection from RT-level designs to gate-level netlists, so that most commercially traded third-party IP cores are under the protection of proof-carrying-based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist.
Cloud federation is a future evolution of cloud computing in which Cloud Service Providers (CSPs) collaborate dynamically to share their virtual infrastructure for load balancing and for meeting quality-of-service requirements during demand spikes. Today, one of the major obstacles to the adoption of federation is the lack of trust between the cloud providers participating in it. To ensure the security of customers' critical and sensitive data, it is important to evaluate and establish trust between cloud providers before redirecting a customer's requests from one provider to another. We propose a trust evaluation model and an underlying protocol that enable cloud providers to evaluate each other's trustworthiness and hence participate in federation to share their infrastructure in a trusted and reliable way.
Internet-scale software is becoming more and more important as a mode of constructing software systems as the Internet develops rapidly. Internet-scale software comprises a set of widely distributed software entities running in an open, dynamic, and uncontrollable Internet environment. Several aspects impact the dependability of Internet-scale software, including technical, organizational, decisional, and human aspects. It is very important to evaluate the dependability of Internet-scale software by integrating all these aspects and analyzing the system architecture from its most foundational elements; however, such an evaluation model has been lacking. This paper proposes an evaluation model of dependability for Internet-scale software on the basis of Bayesian networks. The structure of Internet-scale software is analyzed, and an evaluation system of dependability is established, comprising static metrics, dynamic metrics, prior metrics, and correction metrics. A process of trust attenuation based on assessment is proposed to integrate the subjective trust factors and objective dependability factors that impact system quality. A Bayesian network is built according to the structure analysis, and a bottom-up method that uses Bayesian reasoning to analyze and calculate entity dependability and integration dependability layer by layer is described. A unified dependability of the whole system is computed and then corrected with objective data. The analysis of an experiment on a real system proves that the proposed model is capable of evaluating the dependability of Internet-scale software clearly and objectively. Moreover, it offers effective help for the design, development, deployment, and assessment of Internet-scale software.
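The bottom-up, layer-by-layer flavor of the evaluation can be illustrated with the simplest possible aggregation rules: series composition where every entity is required, and parallel composition for redundancy. This toy sketch assumes independence and stands in for the paper's Bayesian reasoning; the structure and values are illustrative:

```python
import math

def series(*deps: float) -> float:
    """All components required: dependabilities multiply (independence assumed)."""
    return math.prod(deps)

def parallel(*deps: float) -> float:
    """Redundant components: the composite fails only if every replica fails."""
    return 1.0 - math.prod(1.0 - d for d in deps)

# Toy Internet-scale system: two redundant storage entities feed a service
# entity, which a front-end depends on; values are illustrative priors.
storage = parallel(0.95, 0.95)   # replicated store
service = series(storage, 0.99)  # service needs the store plus itself
system  = series(service, 0.98)  # front-end layer on top
print(round(system, 4))
```

A Bayesian network generalizes this by letting each node's dependability be conditioned on its parents instead of assumed independent, and by allowing evidence to correct the computed values.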
In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves upon a previous method through changes in the signal-processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, as it allows monitoring anomalous variations of the ENF related to audio edit events. Working in an unsupervised manner, the edit detector compares the extent of ENF variations, centered at its nominal frequency, with a variable threshold that defines the upper limit for the normal variations observed in unedited signals. The ENF variations caused by edits in the signal are likely to exceed the threshold, providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms on two distinct annotated databases. Results are reported for originally noisy database signals as well as for versions of them further degraded under controlled conditions. A comparative performance evaluation, in terms of equal error rate (EER), reveals that for one of the tested databases an improvement from 7% to 4% EER is achieved from the original to the new edit detection method. When the signals are amplitude-clipped or corrupted by broadband background noise, the performance figures of the novel method follow the same profile as those of the original method.
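The thresholding idea can be illustrated on synthetic data: compare the deviation of the estimated ENF from its nominal value against a threshold derived from the signal's own statistics. This is a sketch of the criterion only, not the paper's full signal-processing chain; the constants are illustrative:

```python
import numpy as np

def detect_edits(enf: np.ndarray, nominal: float = 50.0, k: float = 8.0):
    """Flag ENF samples whose variation around the nominal frequency
    exceeds a variable, data-driven threshold.

    A robust spread estimate (median absolute deviation) characterizes the
    normal variation of unedited signals; excursions beyond
    median + k * MAD are reported as candidate edit points.
    """
    dev = np.abs(enf - nominal)
    mad = np.median(np.abs(dev - np.median(dev)))
    threshold = np.median(dev) + k * mad
    return np.flatnonzero(dev > threshold)

rng = np.random.default_rng(2)
enf = 50.0 + 0.01 * rng.standard_normal(1000)  # unedited background variation
enf[400:403] += 0.2                            # discontinuity left by a cut
print(detect_edits(enf))                       # indices around 400..402
```

Tying the threshold to the signal's own spread is what lets the detector run unsupervised across recordings with different noise levels.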