Bibliography
Cloud technologies are increasingly important to IT departments because they allow them to concentrate on strategy instead of maintaining data centers; one of the biggest advantages of the cloud, especially the hybrid cloud, is the ability to share computing resources among multiple providers to overcome infrastructure limitations. User identity federation is considered the second major risk in the cloud, and since business organizations use multiple cloud service providers, IT departments face a range of constraints. Several solutions to this problem have been suggested, such as federated identity, which offers a number of advantages despite suffering from challenges common to new technologies. The following paper examines federated identity, its components, advantages, and disadvantages, and then proposes a number of useful scenarios for managing identity in hybrid cloud infrastructures.
Distributed Denial of Service (DDoS) attacks are among the most challenging network security problems to address. Existing defense mechanisms against DDoS attacks usually filter the attack traffic at the victim side. The problem is exacerbated when the attack packets carry spoofed IP addresses: even if the attack traffic can be filtered by the victim, the attacker may still block access to the victim by consuming its computing resources or a large portion of its bandwidth. This paper proposes a Traceback-based Defense against DDoS Flooding Attacks (TDFA) approach to counter this problem. TDFA consists of three main components: detection, traceback, and traffic control. The goal of this approach is to place the packet filtering as close to the attack source as possible; the traffic control component at the victim side then sets a limit on the packet forwarding rate to the victim. This mechanism effectively reduces the rate at which attack packets are forwarded and therefore improves the throughput of legitimate traffic. Our results, based on real-world data sets, show that TDFA is effective in reducing the attack traffic and preserving the quality of service for legitimate traffic.
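As an illustration of the rate-limiting idea behind TDFA's traffic control component, here is a minimal token-bucket sketch; the token-bucket policy, its parameters, and the packet interface are illustrative assumptions, not the authors' implementation.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter: a packet is forwarded only
    if a token is available, capping the forwarding rate to the victim."""
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps       # tokens (packets) added per second
        self.capacity = burst      # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate_pps=1000, burst=200)
# forward packet if limiter.allow() else drop it
```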
The term Cloud Computing is not something that appeared overnight; it may date from the time when computer systems first accessed applications and services remotely. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial communities. It is a next-generation information technology architecture offering on-demand access to network resources: a dynamic, virtualized, scalable, pay-per-use model over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources" that includes applications, data, runtime, middleware, operating systems, virtualization, servers, data storage and sharing, and networking, and tries to take over most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy: most services are outsourced to third parties, which adds a considerable level of risk. Cloud computing suffers from several issues, the most significant being security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information-centric, adaptive, proactive, and built in. Cloud computing and its security are emerging areas of study. In this paper, we discuss data security in the cloud at the service-provider end and propose a network storage architecture for data that ensures availability, reliability, scalability, and security.
An abnormal behavior detection algorithm for surveillance is required to correctly identify targets as being in normal or chaotic movement. A model is developed here for this purpose. The uniqueness of this algorithm is the use of foreground detection with a Gaussian mixture model (FGMM) before passing the video frames to an optical flow model using the Lucas-Kanade approach. Information about the horizontal and vertical displacements and the directions associated with each pixel of the object of interest is extracted. These features are then fed to a feed-forward neural network for classification and simulation. The study is conducted on real-time videos and some synthesized videos. The accuracy of the method has been calculated using the performance parameters for neural networks. Compared with plain optical flow, this model obtains improved results without noise. Classes are correctly identified with an overall performance of 3.4e-02 and an error percentage of 2.5.
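A minimal sketch of the feature-extraction pipeline this abstract describes, using OpenCV's MOG2 Gaussian-mixture background subtractor and pyramidal Lucas-Kanade tracker; the input file, masking strategy, and parameter values are illustrative assumptions, not the authors' configuration.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")    # assumed input video
fgmm = cv2.createBackgroundSubtractorMOG2()   # Gaussian-mixture foreground model

ret, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = fgmm.apply(frame)                  # restrict tracking to foreground
    p0 = cv2.goodFeaturesToTrack(prev_gray, mask=mask, maxCorners=200,
                                 qualityLevel=0.01, minDistance=5)
    if p0 is not None:
        p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good = st.reshape(-1) == 1
        d = (p1 - p0).reshape(-1, 2)[good]    # per-point (dx, dy) displacements
        angles = np.arctan2(d[:, 1], d[:, 0]) # per-point motion direction
        features = np.hstack([d, angles[:, None]])  # would be fed to the classifier
    prev_gray = gray
```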
With the development of cloud computing and NoSQL databases, more and more sensitive information is stored in NoSQL databases, which exposes quite a few security vulnerabilities. This paper discusses the security features of the MongoDB database and proposes a transparent middleware implementation. The analysis of the experimental results shows that this transparent middleware can efficiently encrypt sensitive data specified by users at the dataset level. Existing application systems need few modifications in order to adopt this middleware.
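To make the transparent-encryption idea concrete, here is a minimal client-side sketch that wraps a pymongo collection and encrypts configured fields with Fernet; the wrapper class, field names, and key handling are assumptions for illustration, not the paper's middleware. Note that fields encrypted this way can no longer be queried server-side by plaintext value.

```python
from cryptography.fernet import Fernet
from pymongo import MongoClient

key = Fernet.generate_key()   # in practice, load from a key-management service
fernet = Fernet(key)

class EncryptingCollection:
    """Thin stand-in for a transparent middleware: encrypts the configured
    sensitive fields on insert and decrypts them on read."""
    def __init__(self, collection, sensitive_fields):
        self.coll = collection
        self.fields = sensitive_fields

    def insert_one(self, doc):
        enc = dict(doc)
        for f in self.fields:
            if f in enc:
                enc[f] = fernet.encrypt(str(enc[f]).encode())
        return self.coll.insert_one(enc)

    def find_one(self, query):
        doc = self.coll.find_one(query)
        if doc:
            for f in self.fields:
                if f in doc:
                    doc[f] = fernet.decrypt(doc[f]).decode()
        return doc

coll = EncryptingCollection(MongoClient().testdb.users, ["ssn", "email"])
coll.insert_one({"name": "alice", "ssn": "123-45-6789"})
print(coll.find_one({"name": "alice"}))
```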
With the rapid development of Wireless Sensor Networks (WSNs), besides energy efficiency, Quality of Service (QoS) support and the validity of packet transmission should be considered under some circumstances. In this paper, after summarizing the advantages and defects of the LEACH protocol, and by combining a trust evaluation mechanism with energy and QoS control, a trust-based QoS routing algorithm is put forward. Firstly, energy control and coverage scale are adopted to keep the load balanced in the cluster-head selection phase. Secondly, a trust evaluation mechanism is designed to increase the credibility of the network in the node clustering stage. Finally, in the information transmission period, verification and ACK mechanisms are applied to guarantee the validity of data transmission. The improved protocol can not only prolong nodes' life expectancy but also increase the credibility of information transmission and reduce packet loss. Compared with typical routing algorithms in sensor networks, the new algorithm has better performance.
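A small sketch of how trust and residual energy might weight LEACH-style cluster-head election; the classic LEACH threshold is real, but the energy and trust scaling shown here is an illustrative assumption, not the authors' exact formula.

```python
import random

P = 0.05  # desired fraction of cluster heads per round (LEACH parameter)

def leach_threshold(r):
    """Classic LEACH election threshold T(n) for round r."""
    return P / (1 - P * (r % int(1 / P)))

def is_cluster_head(node, r):
    """Trust- and energy-weighted head election: the classic threshold is
    scaled by residual energy and trust, so depleted or distrusted nodes
    are less likely to become heads (illustrative weighting)."""
    if node["was_head_this_epoch"]:
        return False
    t = (leach_threshold(r)
         * (node["energy"] / node["initial_energy"])
         * node["trust"])
    return random.random() < t

node = {"energy": 0.8, "initial_energy": 1.0, "trust": 0.9,
        "was_head_this_epoch": False}
print(is_cluster_head(node, r=3))
```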
The integration of social networking concepts into the Internet of Things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners, with the benefit of improving network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management, starting from the solutions proposed for P2P and social networks. In the subjective model, each node computes the trustworthiness of its friends on the basis of its own experience and of the opinions of the friends it has in common with the potential service providers. In the objective model, the information about each node is distributed and stored using a distributed hash table, so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost all malicious nodes in the network, at the expense of an increase in network traffic for feedback exchange.
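A minimal sketch of the subjective-model idea: blending a node's own experience with the credibility-weighted opinions of common friends. The blending parameter and weighting scheme are assumptions for illustration, not the paper's exact formula.

```python
def subjective_trust(direct, friend_opinions, friend_credibility, alpha=0.5):
    """Illustrative subjective trust computation.

    direct             -- own past-experience score in [0, 1]
    friend_opinions    -- {friend: opinion of the candidate, in [0, 1]}
    friend_credibility -- {friend: how much we trust that friend, in [0, 1]}
    alpha              -- weight of direct experience vs. friends' opinions
    """
    total_w = sum(friend_credibility[f] for f in friend_opinions)
    if total_w:
        indirect = sum(friend_opinions[f] * friend_credibility[f]
                       for f in friend_opinions) / total_w
    else:
        indirect = direct   # no common friends: fall back on own experience
    return alpha * direct + (1 - alpha) * indirect

print(subjective_trust(0.9, {"a": 0.8, "b": 0.4}, {"a": 0.9, "b": 0.2}))
```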
In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables the secure distribution of personalized decryption keys containing the buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered using non-blind decoding and that it is robust to common attacks. The proposed scheme can be a valid solution to both the customer's rights and scalability issues in multimedia content distribution.
Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages mainly support legitimate activity, it is now also exploited for a purpose it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack platform. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source, in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experimental campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered in state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP flood and UDP storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud compared with other legitimate workloads.
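To illustrate the source-side detection step, here is a minimal PCA sketch over synthetic per-VM system metrics; the metric set, the synthetic data, and the distance threshold are all assumptions for illustration, not the paper's pipeline or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic per-VM metrics: CPU %, memory %, packets/s, context switches/s.
rng = np.random.default_rng(0)
legit = rng.normal(loc=[30, 40, 200, 500], scale=[5, 8, 40, 80], size=(200, 4))
bot   = rng.normal(loc=[70, 45, 900, 520], scale=[5, 8, 60, 80], size=(20, 4))
X = np.vstack([legit, bot])

# Project standardized metrics onto the first two principal components.
Z = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(Z)

# Flag workloads that separate from the legitimate cluster in PC space.
centroid = scores[:200].mean(axis=0)
dist = np.linalg.norm(scores - centroid, axis=1)
threshold = dist[:200].mean() + 3 * dist[:200].std()
print("flagged samples:", np.where(dist > threshold)[0])
```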
Recently, there has been a pronounced increase of interest in the field of renewable energy. In this area, power inverters are crucial building blocks in the segment of energy converters, since they change direct current (DC) to alternating current (AC). Grid-connected power inverters should operate in synchronism with the grid voltage. In this paper, the structure of a power system based on adaptive filtering is described. The main purpose of the adaptive filter is to adapt the output signal of the inverter to the corresponding load and/or grid signal. By involving adaptive filtering, the response time decreases and the quality of the power delivered to the load or grid increases. A comparative analysis of power system operation with and without adaptive filtering is given. In addition, the impact of variable load impedance on the quality of the delivered power is considered. Results relating to the total harmonic distortion (THD) factor are obtained with Matlab/Simulink software.
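As a concrete example of the adaptation idea, a minimal LMS adaptive filter in Python (the paper itself uses Matlab/Simulink) that adapts an inverter-like signal toward a 50 Hz grid reference; tap count, step size, and the synthetic signals are illustrative assumptions, not the paper's structure.

```python
import numpy as np

def lms_adapt(x, d, n_taps=16, mu=0.01):
    """LMS adaptive filter: adapts weights so the filtered signal x
    tracks the reference d (here, the grid voltage)."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]   # most recent samples first
        y[n] = w @ xn
        e = d[n] - y[n]              # error against the grid reference
        w += 2 * mu * e * xn         # LMS weight update
    return y, w

t = np.arange(0, 0.1, 1 / 10_000)                 # 10 kHz sampling
grid = np.sin(2 * np.pi * 50 * t)                 # 50 Hz grid reference
inv = grid + 0.2 * np.sin(2 * np.pi * 150 * t)    # inverter output with 3rd harmonic
y, w = lms_adapt(inv, grid)
```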
To improve the overall performance of range-image denoising, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbor search and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented to adapt the window size to the dynamic environment of each point, accompanied by practical criteria for adaptively determining the variable window size. Experiments on real range images of a multi-line surface are carried out, with evaluations in terms of computational complexity and quality assessment, including comparative analysis against several other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and, with the help of the variable window, denoise it with strong adaptability.
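A rough sketch of the distance-weighted replacement step: each detected noise pixel is replaced by an inverse-distance-weighted mean of the non-noise neighbors in a window. The exact weighting of the paper's Index Distance Weighted Mean filter may differ; this is only an illustrative stand-in.

```python
import numpy as np

def idw_mean_denoise(depth, noise_mask, window=5):
    """Replace each noise pixel with an inverse-distance-weighted mean of
    the non-noise neighbors inside a (window x window) region."""
    out = depth.copy()
    h, w = depth.shape
    r = window // 2
    for i, j in zip(*np.nonzero(noise_mask)):
        i0, i1 = max(0, i - r), min(h, i + r + 1)
        j0, j1 = max(0, j - r), min(w, j + r + 1)
        ys, xs = np.mgrid[i0:i1, j0:j1]
        valid = ~noise_mask[i0:i1, j0:j1]     # keep only non-noise neighbors
        if not valid.any():
            continue
        d = np.hypot(ys - i, xs - j)[valid]
        weights = 1.0 / (d + 1e-9)            # closer neighbors weigh more
        out[i, j] = np.average(depth[i0:i1, j0:j1][valid], weights=weights)
    return out
```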
In this paper, we present a formal model for the verification of the DNSsec protocol in the interactive theorem prover Isabelle/HOL. Relying on the inductive approach to security protocol verification, this formal analysis provides a more expressive representation than the widely accepted model-checking analysis. Our mechanized model allows us to represent the protocol, all its possible traces, and the attacker and his knowledge. The fine-grained model allows us to show origin authentication and replay-attack prevention. Most prominently, we succeed in expressing delegation signatures and in proving their authenticity formally.
Today’s quality of life is highly dependent on the successful operation of many large-scale industrial control systems (ICS). To enhance their protection against cyber-attacks and operational errors, we develop a simulation-based verification framework with cross-layer verification techniques that allow comprehensive analysis of the entire ICS-specific stack, including the application, protocol, and network layers. This is a work-in-progress paper.
Anonymous communication is important for many applications of mobile ad hoc networks (MANETs) deployed in adversarial environments. A major requirement on the network is the ability to provide unidentifiability and unlinkability for mobile nodes and their traffic. Although a number of anonymous secure routing protocols have been proposed, this requirement is not fully satisfied: the existing protocols are vulnerable to attacks using fake routing packets or denial-of-service broadcasting, even when node identities are protected by pseudonyms. In this paper, we propose a new routing protocol, authenticated anonymous secure routing (AASR), to satisfy the requirement and defend against these attacks. More specifically, the route request packets are authenticated by a group signature, to defend against potential active attacks without unveiling node identities. Key-encrypted onion routing with a route secret verification message is designed to prevent intermediate nodes from inferring the real destination. Simulation results demonstrate the effectiveness of the proposed AASR protocol, with improved performance compared with existing protocols.
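To illustrate the key-encrypted onion idea in general terms, a minimal layered-encryption sketch: the source wraps the route secret in one layer per hop, so each intermediate node peels exactly one layer and learns only its successor, never the real destination. This sketches the generic construction, not AASR's exact message format or key distribution.

```python
from cryptography.fernet import Fernet

hop_keys = [Fernet.generate_key() for _ in range(3)]   # one key per hop

def build_onion(payload, keys):
    """Encrypt innermost layer first, so the first hop peels the outermost."""
    onion = payload
    for k in reversed(keys):
        onion = Fernet(k).encrypt(onion)
    return onion

def peel(onion, key):
    """Each hop removes exactly one encryption layer."""
    return Fernet(key).decrypt(onion)

onion = build_onion(b"route-secret:DEST", hop_keys)
for k in hop_keys:
    onion = peel(onion, k)
print(onion)   # b'route-secret:DEST' recovered only at the destination
```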
Although there has been much research on the leakage of sensitive data in Android applications, most of the existing work focuses on detecting malware or adware that intentionally collects user data; there is not much research on analyzing app vulnerabilities that may cause privacy leakage. In this paper, we present a vulnerability analysis method that combines taint analysis and cryptography misuse detection. The four steps of this method are decompilation, taint analysis, API call recording, and cryptography misuse analysis; all steps except taint analysis can be executed by existing tools. We develop a prototype tool, PW Exam, to analyze how passwords are handled and whether an app is vulnerable to password leakage. Our experiment shows that a third of the apps are vulnerable to leaking their users' passwords.
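To make the cryptography-misuse step concrete, a toy scan over decompiled Java sources that flags a few well-known misuse patterns; the pattern list and file layout are assumptions for illustration, and real detectors (including the paper's analysis) are considerably more thorough.

```python
import re

# A few well-known Android crypto misuse patterns in decompiled Java code.
MISUSE_PATTERNS = {
    "ECB mode":            re.compile(r'Cipher\.getInstance\("[^"]*ECB[^"]*"\)'),
    "hardcoded key bytes": re.compile(r'SecretKeySpec\(\s*"[^"]+"\.getBytes'),
    "MD5 digest":          re.compile(r'MessageDigest\.getInstance\("MD5"\)'),
}

def scan_source(path):
    """Return (file, line, pattern-name) for each misuse hit."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for name, pat in MISUSE_PATTERNS.items():
                if pat.search(line):
                    findings.append((path, lineno, name))
    return findings

# for hit in scan_source("decompiled/LoginActivity.java"): print(hit)
```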
Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting (XSS) vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various other malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications; however, existing approaches and tools are not free from false positives and false negatives. In this paper, we propose an HTML context-sensitive approach, based on taint analysis and defensive programming, for the precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions for improving the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
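The reason context sensitivity matters is that the safe encoder depends on where a tainted value lands in the generated HTML. A minimal sketch (shown in Python for brevity; the paper targets PHP) with three assumed contexts:

```python
import html
import json

def encode_for_context(tainted, context):
    """Apply the output encoding appropriate to the sink context."""
    if context == "html_body":        # e.g. <p>VALUE</p>
        return html.escape(tainted)
    if context == "html_attribute":   # e.g. <input value="VALUE">
        return html.escape(tainted, quote=True)
    if context == "js_string":        # e.g. var x = VALUE;
        return json.dumps(tainted)    # a safely quoted JS string literal
    raise ValueError(f"no safe encoder for context {context!r}")

payload = '"><script>alert(1)</script>'
print(encode_for_context(payload, "html_attribute"))
```

A context-insensitive scanner that applies one encoder everywhere misses exactly the cases where the right encoding differs per sink, which is a common source of both false positives and false negatives.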
A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional adaptive filtering algorithms based on the minimum mean square error (MSE) criterion, the MCC-based algorithm shows better robustness against impulsive interference. However, its major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter, as in the conventional convex combination scheme, we maximize the correntropy, which makes the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight transfer method to further improve the tracking performance. Good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios that include impulsive interference and abrupt changes.
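A sketch of the core idea: two MCC filters with different step sizes (fast convergence vs. low steady-state error), mixed by a sigmoid-parameterized weight whose update also maximizes correntropy. Step sizes, kernel width, and the omitted weight-transfer step are illustrative assumptions, not the letter's exact algorithm.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mcc_combined_filter(x, d, n_taps=8, mu_fast=0.05, mu_slow=0.005,
                        mu_a=1.0, sigma=1.0):
    """Convex combination of two MCC adaptive filters. Both the component
    updates and the mixing update use the correntropy-induced weighting
    exp(-e^2 / (2 sigma^2)), which suppresses impulsive outliers."""
    w1 = np.zeros(n_taps)   # fast filter (quick convergence)
    w2 = np.zeros(n_taps)   # slow filter (low steady-state error)
    a = 0.0                 # mixing parameter, lambda = sigmoid(a)
    y = np.zeros(len(x))
    kern = lambda e: np.exp(-e**2 / (2 * sigma**2))   # correntropy kernel
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]
        y1, y2 = w1 @ xn, w2 @ xn
        lam = sigmoid(a)
        y[n] = lam * y1 + (1 - lam) * y2
        e1, e2, e = d[n] - y1, d[n] - y2, d[n] - y[n]
        w1 += mu_fast * kern(e1) * e1 * xn            # MCC update, fast
        w2 += mu_slow * kern(e2) * e2 * xn            # MCC update, slow
        a += mu_a * kern(e) * e * (y1 - y2) * lam * (1 - lam)  # maximize correntropy
    return y
```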
The paper presents a secure solution that provides VoIP service for mobile users, handling both pre-call and mid-call mobility. Pre-call mobility is implemented using a presence server that acts as a DNS for the moving users. Our approach also detects any change in the attachment point of the moving users and transmits it to the peer entity via in-band signaling using socket communication. For true mid-call mobility, we also employ buffering techniques that store packets for the duration of the signaling procedure. The solution was implemented for Android devices and uses ASP technology for the server part.
Wireless sensor networks (WSNs) are prone to propagating malware because of the special characteristics of sensor nodes. Considering the fact that sensor nodes periodically enter sleep mode to save energy, we extend traditional epidemic theory and construct a malware propagation model consisting of seven states, formulating differential equations to represent the dynamics between states. We view the decision-making problem between system and malware as an optimal control problem and therefore formulate a malware-defense differential game in which the system can dynamically choose its strategies to minimize the overall cost, whereas the malware intelligently varies its strategies over time to maximize this cost. We prove the existence of a saddle point in the game. Further, we derive optimal dynamic strategies for the system and the malware, which are bang-bang controls that can be conveniently operated and are suitable for sensor nodes. Experiments identify the factors that influence the propagation of malware. We also determine that optimal dynamic strategies can reduce the overall cost to a certain extent and can suppress malware propagation. These results provide a theoretical foundation for limiting malware in WSNs.
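A simplified stand-in for the paper's seven-state model: a four-state epidemic over sensor nodes with an added sleep state (sleeping nodes cannot be infected), integrated with SciPy. All rates and states here are illustrative; the paper's model and its game-theoretic controls are richer.

```python
import numpy as np
from scipy.integrate import solve_ivp

# beta: infection rate, gamma: recovery (patching) rate,
# s_in/s_out: rates of entering/leaving sleep mode (all illustrative).
beta, gamma, s_in, s_out = 0.5, 0.1, 0.2, 0.3

def dynamics(t, y):
    S, I, R, L = y   # susceptible, infected, recovered, sleeping fractions
    dS = -beta * S * I - s_in * S + s_out * L
    dI = beta * S * I - gamma * I
    dR = gamma * I
    dL = s_in * S - s_out * L      # sleeping nodes are shielded from infection
    return [dS, dI, dR, dL]

sol = solve_ivp(dynamics, (0, 100), [0.94, 0.01, 0.0, 0.05], dense_output=True)
t = np.linspace(0, 100, 200)
S, I, R, L = sol.sol(t)
print("peak infected fraction:", I.max())
```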
Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
Blind Source Separation (BSS) deals with the recovery of source signals from a set of observed mixtures when little or no knowledge of the mixing process is available. BSS finds application in the context of network coding, where relaying linear combinations of packets maximizes throughput and increases loss immunity. By relieving the nodes of the need to send the combination coefficients, the overhead cost is largely reduced. However, the scaling ambiguity of the technique and the quasi-uniformity of compressed media sources make it unfit, in its present state, for multimedia transmission. In order to open new practical applications for BSS in the context of multimedia transmission, we recently proposed using a non-linear encoding to increase the discriminating power of classical entropy-based separation methods. Here, we propose appending to each source a non-linear message digest, which incurs a smaller overhead than per-symbol encoding and can be more easily tuned. Our results show that our algorithm provides high decoding rates for different media types, such as image, audio, and video, when the transmitted messages are smaller than 1.5 kilobytes, which is typically the case in a realistic transmission scenario.
In 2013, Biswas and Misic proposed a new privacy-preserving authentication scheme for WAVE-based vehicular ad hoc networks (VANETs), claiming to use a variant of the Elliptic Curve Digital Signature Algorithm (ECDSA). However, our study has discovered that their authentication scheme is vulnerable to a private key reveal attack: any malicious receiving vehicle that receives a valid signature from a legal signing vehicle can derive the signing vehicle's private key from the learned valid signature. Hence, the authentication scheme proposed by Biswas and Misic is insecure. We thus propose an improved version to overcome this weakness. The improved scheme also supports identity revocation and tracing; based on this property, the CA and a receiving entity (RSU or OBU) can check whether a received signature has been generated by a revoked vehicle. A security analysis is also conducted to evaluate the security strength of the proposed authentication scheme.
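For context on this class of attack, the classic ECDSA nonce-reuse key recovery is sketched below; the specific flaw in the Biswas-Misic variant differs, but the end result is the same kind of failure, where the signer's private key falls out of observed signatures.

```python
# Given two ECDSA signatures (r, s1) on hash z1 and (r, s2) on hash z2
# that reused the same nonce k over a curve of prime order n:
#   k = (z1 - z2) / (s1 - s2) mod n
#   d = (s1 * k - z1) / r     mod n

def recover_private_key(r, s1, z1, s2, z2, n):
    inv = lambda x: pow(x, -1, n)             # modular inverse (Python 3.8+)
    k = (z1 - z2) * inv(s1 - s2) % n          # recover the repeated nonce
    d = (s1 * k - z1) * inv(r) % n            # recover the private key
    return d
```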