Biblio

Found 2859 results
Filters: First Letter Of Last Name is H
2015-05-01
Pukkawanna, S., Hazeyama, H., Kadobayashi, Y., Yamaguchi, S.  2014.  Investigating the utility of S-transform for detecting Denial-of-Service and probe attacks. Information Networking (ICOIN), 2014 International Conference on. :282-287.

Denial-of-Service (DoS) and probe attacks are growing more modern and sophisticated in order to evade detection by Intrusion Detection Systems (IDSs) and to increase the threat to the availability of network services. Detecting these attacks is difficult for network operators using misuse-based IDSs because they must anticipate attackers and keep upgrading their IDSs with new, accurate attack signatures. In this paper, we propose a novel signal- and image-processing-based method for detecting network probe and DoS attacks that requires no prior knowledge of attacks. The method uses a time-frequency representation technique called the S-transform, an extension of the wavelet transform, to reveal abnormal frequency components caused by attacks in a traffic signal (e.g., a time series of packet counts). First, the S-transform converts the traffic signal into a two-dimensional image that describes its time-frequency behavior; frequencies that behave abnormally appear as abnormal regions in the image. Second, Otsu's method is used to detect these abnormal regions and identify the times at which attacks occur. We evaluated the effectiveness of the proposed method on several network probe and DoS attacks, including port scans, packet flooding attacks, and a low-intensity DoS attack. The results clearly indicate that the method is effective for detecting probe and DoS attack streams generated on the real-world Internet.
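
The pipeline lends itself to a compact illustration. In the Python sketch below, a sliding-window FFT spectrogram stands in for the true S-transform, and Otsu's threshold is implemented directly; the traffic signal, window sizes, and injected anomaly are invented for illustration, not taken from the paper.

```python
import numpy as np

def spectrogram(signal, win=64, hop=8):
    """Magnitude time-frequency image via sliding-window FFT (an S-transform stand-in)."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # rows: frequency, cols: time

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # probability mass of the low class
    m = np.cumsum(p * centers)           # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (m[-1] * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

rng = np.random.default_rng(0)
pkts = rng.poisson(100, 2048).astype(float)                 # packets-per-interval series
pkts[900:1100] += 60 * (1 + np.sin(0.5 * np.arange(200)))   # injected "attack" burst

img = spectrogram(pkts - pkts.mean())
abnormal = img > otsu_threshold(img)
hits = np.where(abnormal[1:].any(axis=0))[0]  # ignore the DC row
print("suspect frames:", hits.min(), "to", hits.max())
```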

Yuxi Liu, Hatzinakos, D.  2014.  Earprint: Transient Evoked Otoacoustic Emission for Biometrics. Information Forensics and Security, IEEE Transactions on. 9:2291-2301.

Biometrics is attracting increasing attention in privacy- and security-sensitive applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques are threatening the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality, the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE, as a physiological outcome of the human auditory system, is naturally immune to replay and falsification attacks. In this paper, we resort to wavelet analysis to derive the time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently utilized to reduce intrasubject variability and further capture intersubject differentiating features. With practical applications in mind, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set collected in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears.
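
As a rough sketch of the enrollment-and-identification flow, the following Python uses pywt for the wavelet decomposition and scikit-learn's linear discriminant analysis, with synthetic subject-specific signals standing in for real TEOAE recordings; the feature choice (per-band energies) and all parameters are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def fake_teoae(subject_seed, n=512):
    """Synthetic subject-specific response: a fixed template plus measurement noise."""
    template = np.random.default_rng(subject_seed).standard_normal(n)
    return template + 0.3 * rng.standard_normal(n)

def wavelet_features(signal, wavelet="db4", level=5):
    """Per-band energies of the discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

subjects, recordings = 10, 20
X = np.array([wavelet_features(fake_teoae(s))
              for s in range(subjects) for _ in range(recordings)])
y = np.repeat(np.arange(subjects), recordings)

lda = LinearDiscriminantAnalysis()   # reduces intrasubject variability
lda.fit(X[::2], y[::2])              # even-indexed recordings: enrollment
print("identification accuracy:", lda.score(X[1::2], y[1::2]))
```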

Koga, H., Honjo, S.  2014.  A secret sharing scheme based on a systematic Reed-Solomon code and analysis of its security for a general class of sources. Information Theory (ISIT), 2014 IEEE International Symposium on. :1351-1355.

In this paper we investigate a secret sharing scheme based on a shortened systematic Reed-Solomon code. In the scheme L secrets S1, S2, ..., SL and n shares X1, X2, ..., Xn satisfy certain n - k + L linear equations. Security of such a ramp secret sharing scheme is analyzed in detail. We prove that this scheme realizes a (k, n)-threshold scheme for the case of L = 1 and a ramp (k, L, n)-threshold scheme for the case of 2 ≤ L ≤ k - 1 under a certain assumption on S1, S2, ..., SL.
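
For intuition, the L = 1 case coincides with a classical (k, n)-threshold scheme, which can be realized with polynomial shares over a prime field; the sketch below uses Lagrange interpolation at zero and an illustrative Mersenne prime, rather than the paper's shortened systematic Reed-Solomon encoding.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret, k, n, rng):
    """Shares X_i = f(i) for a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any k of the n shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = make_shares(secret=123456789, k=3, n=5, rng=random.Random(7))
assert recover(shares[:3]) == 123456789   # any k = 3 shares reconstruct the secret
assert recover(shares[2:]) == 123456789
```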

Qingyi Chen, Hongwei Kang, Hua Zhou, Xingping Sun, Yong Shen, YunZhi Jin, Jun Yin.  2014.  Research on cloud computing complex adaptive agent. Service Systems and Service Management (ICSSSM), 2014 11th International Conference on. :1-4.

Industry has gradually realized that the increasing complexity of cloud computing, arising from the interaction of technology, business, society, and the like, cannot be addressed simply through research on information technology; it must be explained and studied from a systematic, scientific perspective grounded in the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of the active adaptive agents that constitute the cloud computing system and proposes a service-agent concept and basic model through commonality abstraction at two basic levels, cloud computing technology and business, thus laying a foundation both for further research on cloud computing complexity and for multi-agent-based simulation of cloud computing environments.

Achouri, A., Hlaoui, Y.B., Jemni Ben Ayed, L.  2014.  Institution Theory for Services Oriented Applications. Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International. :516-521.

In this paper, we present our approach for the transformation of workflow applications based on institution theory. The workflow application is modeled with a UML Activity Diagram (UML AD). Then, for formal verification purposes, the graphical model is translated into an Event-B specification. Institution theory is used at two levels. First, we define a local semantics for UML AD and the Event-B specification using a categorical description of each. Second, we define an institution comorphism to link the two defined institutions. Because institution theory is used throughout, the theoretical foundations of our approach are studied within a single mathematical framework. The Event-B specification resulting from the transformation approach is used for the formal verification of functional properties and for verifying the absence of problems such as deadlock. Additionally, with the institution comorphism, we define the semantic correctness and coherence of the model transformation.

Hummel, M.  2014.  State-of-the-Art: A Systematic Literature Review on Agile Information Systems Development. System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4712-4721.

Principles of agile information systems development (ISD) have attracted the interest of practitioners as well as researchers. The goal of this literature review is to validate, update and extend previous reviews of the general state of research on agile ISD. Besides covering categories such as the employed research methods and data collection techniques, the importance of theory is highlighted by evaluating the theoretical foundations and contributions of former studies. Since agile ISD is rooted in both the IS and software engineering disciplines, important outlets of both disciplines are included in the search process, resulting in 482 investigated papers. The findings show that quantitative studies and the theoretical underpinnings of agile ISD are lacking. Extreme Programming is still the most researched agile ISD method, and more research effort on Scrum is needed. In consequence, multiple research gaps that need further attention are identified.

Cardoso, L.S., Massouri, A., Guillon, B., Ferrand, P., Hutu, F., Villemaud, G., Risset, T., Gorce, J.-M.  2014.  CorteXlab: A facility for testing cognitive radio networks in a reproducible environment. Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM), 2014 9th International Conference on. :503-507.

While many theoretical and simulation works have highlighted the potential gains of cognitive radio, several technical issues still need to be evaluated from an experimental point of view. Deploying complex heterogeneous system scenarios is tedious, time-consuming and hardly reproducible. To address this problem, we have developed a new experimental facility, called CorteXlab, that allows complex multi-node cognitive radio scenarios to be easily deployed and tested by anyone in the world. Our objective is not to design new software defined radio (SDR) nodes, but rather to provide comprehensive access to a large set of high performance SDR nodes. The CorteXlab facility offers a 167 m² electromagnetically (EM) shielded room and integrates a set of 24 universal software radio peripherals (USRPs) from National Instruments, 18 PicoSDR nodes from Nutaq and 42 IoT-Lab wireless sensor nodes from Hikob. CorteXlab is built upon the foundations of the SensLAB testbed and is based on the free and open-source toolkit GNU Radio. Automation of scenario deployment, experiment start and stop, and result collection is performed by an experiment controller called Minus. CorteXlab is in its final stages of development and is already capable of running test scenarios. In this contribution, we show that CorteXlab is able to easily cope with the usual issues faced by other testbeds, providing a reproducible environment for CR experimentation.

Baraldi, A., Boschetti, L., Humber, M.L.  2014.  Probability Sampling Protocol for Thematic and Spatial Quality Assessment of Classification Maps Generated From Spaceborne/Airborne Very High Resolution Images. Geoscience and Remote Sensing, IEEE Transactions on. 52:701-760.

To deliver sample estimates with the necessary probability foundation to permit generalization from the sample data subset to the whole target population being sampled, probability sampling strategies must satisfy three necessary (but not sufficient) conditions: 1) all inclusion probabilities must be greater than zero in the target population to be sampled: if some sampling units have an inclusion probability of zero, a map accuracy assessment does not represent the entire target region depicted in the map to be assessed; 2) the inclusion probabilities must be a) knowable for nonsampled units and b) known for those units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne very high resolution images, in which: 1) an original Categorical Variable Pair Similarity Index (proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and 2) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts. The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session, the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, the collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence, and the quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ in related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ in related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new interdisciplinary research and market opportunities, in accordance with the visionary goal of the Global Earth Observation System of Systems initiative and the QA4EO international guidelines.
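
Condition 2) is what makes design-based inference work: the inclusion probabilities become estimation weights. The toy Python below, with invented strata and probabilities, shows a Horvitz-Thompson-style weighted accuracy estimate staying near the true map accuracy where the unweighted sample mean is biased by oversampling a rare, low-accuracy stratum.

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented map: a rare stratum (first 1,000 pixels) is classified less accurately.
correct = np.concatenate([rng.random(1_000) < 0.60,
                          rng.random(9_000) < 0.85])
pi = np.where(np.arange(10_000) < 1_000, 0.30, 0.01)  # known, nonzero inclusion probs
sampled = rng.random(10_000) < pi

w = 1.0 / pi[sampled]                        # Horvitz-Thompson weights
ht = np.sum(w * correct[sampled]) / np.sum(w)
print(f"true accuracy      {correct.mean():.3f}")
print(f"HT estimate        {ht:.3f}")
print(f"naive sample mean  {correct[sampled].mean():.3f}  (biased)")
```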

Shigen Shen, Hongjie Li, Risheng Han, Vasilakos, A.V., Yihan Wang, Qiying Cao.  2014.  Differential Game-Based Strategies for Preventing Malware Propagation in Wireless Sensor Networks. Information Forensics and Security, IEEE Transactions on. 9:1962-1973.

Wireless sensor networks (WSNs) are prone to propagating malware because of the special characteristics of sensor nodes. Considering that sensor nodes periodically enter sleep mode to save energy, we extend traditional epidemic theory and construct a malware propagation model consisting of seven states, formulating differential equations to represent the dynamics between states. We view the decision-making problem between the system and the malware as an optimal control problem and therefore formulate a malware-defense differential game in which the system dynamically chooses its strategies to minimize the overall cost, whereas the malware intelligently varies its strategies over time to maximize this cost. We prove the existence of a saddle point in the game. Further, we derive optimal dynamic strategies for the system and the malware, which are bang-bang controls that can be conveniently operated and are suitable for sensor nodes. Experiments identify the factors that influence the propagation of malware. We also determine that optimal dynamic strategies can reduce the overall cost to a certain extent and can suppress malware propagation. These results provide a theoretical foundation for limiting malware in WSNs.
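
The flavor of the model and its control can be conveyed by a much-reduced sketch: the Python below integrates a four-state epidemic (susceptible, infected, dormant/sleeping, recovered) in which a bang-bang defense control switches on at a fixed time and raises the cure rate. The seven states, the game-theoretic derivation of the switching time, and all rate constants are simplified or invented here.

```python
import numpy as np
from scipy.integrate import odeint

beta, sleep, wake, gamma, patch = 0.5, 0.2, 0.3, 0.05, 0.4

def u(t):
    """Illustrative bang-bang defense: off, then fully on after t = 10."""
    return 1.0 if t >= 10 else 0.0

def dynamics(x, t):
    S, I, D, R = x
    infect = beta * S * I
    cure = (gamma + patch * u(t)) * I        # the control raises the cure rate
    return [-infect - sleep * S + wake * D,  # susceptible
            infect - cure,                   # infected
            sleep * S - wake * D,            # dormant (sleep mode)
            cure]                            # recovered

t = np.linspace(0, 60, 601)
traj = odeint(dynamics, [0.99, 0.01, 0.0, 0.0], t)
i = traj[:, 1]
print("peak infected fraction %.3f at t = %.1f" % (i.max(), t[i.argmax()]))
```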

Sierla, S., Hurkala, M., Charitoudi, K., Chen-Wei Yang, Vyatkin, V.  2014.  Security risk analysis for smart grid automation. Industrial Electronics (ISIE), 2014 IEEE 23rd International Symposium on. :1737-1744.

The reliability theory used in the design of complex systems, including electric grids, assumes random component failures and is thus unsuited to analyzing security risks due to attackers who intentionally damage several components of the system. In this paper, a security risk analysis methodology is proposed consisting of vulnerability analysis and impact analysis. Vulnerability analysis is a method developed by security engineers to identify the attacks that are relevant for the system under study; here, the analysis is applied to the communication network topology of the electric grid automation system. Impact analysis is then performed through co-simulation of the automation and the electric grid to assess the potential damage from the attacks. This paper makes an extensive review of vulnerability and impact analysis methods and relevant system modeling techniques from the fields of security and industrial automation engineering, with a focus on smart grid automation, and then applies and combines these approaches to obtain a security risk analysis methodology. The methodology is demonstrated with a case study of fault location, isolation, and supply restoration smart grid automation.

Chen, K.Y., Heckel-Jones, C.A.C., Maupin, N.G., Rubin, S.M., Bogdanor, J.M., Zhenyu Guo, Haimes, Y.Y.  2014.  Risk analysis of GPS-dependent critical infrastructure system of systems. Systems and Information Engineering Design Symposium (SIEDS), 2014. :316-321.

The Department of Energy seeks to modernize the U.S. electric grid through the SmartGrid initiative, which includes the use of Global Positioning System (GPS)-timing-dependent electric phasor measurement units (PMUs) for continual monitoring and automated controls. The U.S. Department of Homeland Security is concerned with the risks associated with increased utilization of GPS timing in the electricity subsector, which could in turn affect a large number of electricity-dependent Critical Infrastructure (CI) sectors. Exploiting the vulnerabilities of GPS systems in the electricity subsector can result in large-scale and costly blackouts. This paper analyzes the risks of the electric grid's increased dependence on GPS through the introduction of PMUs and provides a systems engineering perspective on the GPS-dependent System of Systems (S-o-S) created by the SmartGrid initiative. The team started by defining and modeling the S-o-S, followed by use of a risk analysis methodology to identify and measure risks and to evaluate solutions for mitigating their effects. The team expects that the designs and models resulting from the study will prove useful in determining both current and future risks to GPS-dependent CI sectors, along with the appropriate countermeasures, as the United States moves toward a SmartGrid system.

Shipman, C.M., Hopkinson, K.M., Lopez, J.  2015.  Con-Resistant Trust for Improved Reliability in a Smart-Grid Special Protection System. Power Delivery, IEEE Transactions on. 30:455-462.

This paper applies a con-resistant trust mechanism to improve the performance of a communications-based special protection system, enhancing its effectiveness and resiliency. Smart grids incorporate modern information technologies to increase reliability and efficiency through better situational awareness. However, with the benefits of this new technology come added risks associated with threats and vulnerabilities to the technology and to the critical infrastructure it supports. The research in this paper uses con-resistant trust to quickly identify malicious or malfunctioning (untrustworthy) protection system nodes in order to mitigate instabilities. The con-resistant trust mechanism allows protection system nodes to make trust assessments based on each node's cooperative and defective behaviors, which are observed via periodically reported frequency readings. The trust architecture is tested in experiments comparing a simulated special protection system with a con-resistant trust mechanism to one without it, via an analysis-of-variance statistical model. Simulation results show promise for the proposed con-resistant trust mechanism.
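
The core of such a mechanism can be sketched abstractly: trust climbs slowly with cooperative frequency reports and collapses on defections, with repeated "cons" punished harder each time a node rebuilds trust and defects again. The constants and the exact update rule below are illustrative assumptions, not those of the cited mechanism.

```python
class ConResistantTrust:
    def __init__(self, alpha=0.05, beta=0.5):
        self.trust = 0.5        # neutral prior
        self.alpha = alpha      # slow gain for cooperative behavior
        self.beta = beta        # steep base penalty for defective behavior
        self.defections = 0     # repeated cons hurt more each time

    def observe(self, cooperative: bool):
        if cooperative:
            self.trust += self.alpha * (1.0 - self.trust)
        else:
            self.defections += 1
            self.trust -= self.beta * self.trust * self.defections
        self.trust = max(0.0, min(1.0, self.trust))
        return self.trust

node = ConResistantTrust()
pattern = [True] * 10 + [False] + [True] * 10 + [False]   # a "con" cycle
for ok in pattern:
    t = node.observe(ok)
print("trust after two con cycles: %.3f" % t)  # far lower than after the first
```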

Yihai Zhu, Jun Yan, Yufei Tang, Sun, Y.L., Haibo He.  2014.  Resilience Analysis of Power Grids Under the Sequential Attack. Information Forensics and Security, IEEE Transactions on. 9:2340-2354.

Modern society increasingly relies on electrical service, which also brings risks of catastrophic consequences, e.g., large-scale blackouts. The current literature reveals the vulnerability of power grids under the assumption that substations/transmission lines are removed or attacked synchronously. In reality, however, it is highly possible that such removals are conducted sequentially. Motivated by this idea, we introduce a new attack scenario, called the sequential attack, which assumes that substations/transmission lines can be removed sequentially rather than synchronously. In particular, we find that the sequential attack can discover many combinations of substations whose failures cause large blackouts; these combinations are missed by the synchronous attack model. In addition, we propose a new metric, called the sequential attack graph (SAG), and a practical attack strategy based on it. In simulations, we adopt three test benchmarks and five comparison schemes. The simulation results and complexity analysis show that the proposed scheme has strong performance and low complexity.

Yihai Zhu, Jun Yan, Yufei Tang, Yan Sun, Haibo He.  2014.  The sequential attack against power grid networks. Communications (ICC), 2014 IEEE International Conference on. :616-621.

Vulnerability analysis is vital for safely running power grids. The simultaneous attack, which applies multiple failures at the same time, does not consider the time domain in applying failures and is of limited use in finding unknown vulnerabilities of power grid networks. In this paper, we introduce a new attack scenario, called the sequential attack, in which the failures of multiple network components (i.e., links/nodes) occur at different times. The sequence of such failures can be carefully arranged by attackers to maximize attack performance. This attack scenario opens a new angle for analyzing and discovering vulnerabilities of grid networks. The IEEE 39-bus system is adopted as a test benchmark to compare the proposed attack scenario with the existing simultaneous attack scenario, and new vulnerabilities are found. For example, the sequential failure of two links, e.g., links 26 and 39 in the test benchmark, can cause 80% power loss, whereas their simultaneous failure causes less than 10% power loss. In addition, the sequential attack is demonstrated to be statistically stronger than the simultaneous attack. Finally, several metrics are compared and discussed in terms of whether they can sharply reduce the search space for identifying strong sequential attacks.

Jun Yan, Haibo He, Yan Sun.  2014.  Integrated Security Analysis on Cascading Failure in Complex Networks. Information Forensics and Security, IEEE Transactions on. 9:451-463.

The security of complex networks has drawn significant concern recently. While purely topological analyses from a network security perspective provide some effective techniques, their inability to characterize physical principles calls for a more comprehensive model to approximate the failure behavior of a complex network in reality. In this paper, based on an extended topological metric, we propose an approach to examine the vulnerability of a specific type of complex network, the power system, against cascading failure threats. The proposed approach adopts a model called extended betweenness that combines network structure with electrical characteristics to define the load on power grid components. Using this power transfer distribution factor-based model, we simulated attacks on different components (buses and branches) in the grid and evaluated the vulnerability of the system components with an extended topological cascading failure simulator. The influence of different loading and overloading situations on cascading failures was also evaluated by testing different tolerance factors. Simulation results on a standard IEEE 118-bus test system revealed the vulnerability of network components, which was then validated on a DC power flow simulator with comparisons to other topological measurements. Finally, potential extensions of the approach are discussed, exhibiting both its utility and its challenges in more complex scenarios and applications.
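
A purely topological stand-in for this kind of simulator fits in a few lines of Python with networkx: betweenness centrality approximates component load (the paper's extended betweenness additionally folds in power transfer distribution factors), capacity is (1 + tolerance) * initial load, and overloaded nodes are removed until the cascade settles. The graph and tolerance values are illustrative.

```python
import networkx as nx

def cascade(G, attacked, tolerance=0.2):
    """Motter-Lai-style cascade: remove one node, then iteratively shed overloads."""
    G = G.copy()
    load = nx.betweenness_centrality(G)
    capacity = {v: (1 + tolerance) * load[v] for v in G}
    G.remove_node(attacked)
    while True:
        load = nx.betweenness_centrality(G)
        overloaded = [v for v in G if load[v] > capacity[v]]
        if not overloaded:
            return G
        G.remove_nodes_from(overloaded)

G0 = nx.barabasi_albert_graph(118, 2, seed=4)   # stand-in for a 118-bus grid
for tol in (0.05, 0.2, 0.5):
    survivors = cascade(G0, attacked=0, tolerance=tol)
    largest = max(nx.connected_components(survivors), key=len, default=set())
    print(f"tolerance {tol}: {len(largest)} of {G0.number_of_nodes()} buses survive")
```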

Hong Liu, Huansheng Ning, Yan Zhang, Qingxu Xiong, Yang, L.T.  2014.  Role-Dependent Privacy Preservation for Secure V2G Networks in the Smart Grid. Information Forensics and Security, IEEE Transactions on. 9:208-220.

Vehicle-to-grid (V2G), involving both charging and discharging of battery vehicles (BVs), substantially enhances the smart grid by alleviating peaks in power consumption. In a V2G scenario, the communications between BVs and the power grid may confront severe cyber security vulnerabilities. Traditionally, authentication mechanisms are designed solely for BVs that charge electricity as energy customers. In this paper, we first show that, when a BV interacts with the power grid, it may act in one of three roles: 1) energy demand (i.e., a customer); 2) energy storage; and 3) energy supply (i.e., a generator). We further demonstrate that in each role the BV has dissimilar security and privacy concerns; hence, the traditional approach that considers BVs only as energy customers is not universally applicable to interactions in the smart grid. To address this new security challenge, we propose a role-dependent privacy preservation scheme (ROPS) to achieve secure interactions between a BV and the power grid. In ROPS, a set of interlinked subprotocols incorporates the different privacy considerations that arise when a BV acts as a customer, storage, or generator. We also outline both centralized and distributed discharging operations for a BV feeding energy back into the grid. Finally, security analysis indicates that the proposed ROPS possesses the required security and privacy properties and is a promising security solution for V2G networks in the smart grid. The identified security challenge and the proposed ROPS scheme show that role awareness is crucial for secure V2G networks.

2015-04-30
Hua Chai, Wenbing Zhao.  2014.  Towards trustworthy complex event processing. Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on. :758-761.

Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from a potentially large volume of events in soft real time. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis of complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics of typical event processing operations, we are able to design lightweight mechanisms that incur minimal runtime overhead, appropriate for soft real-time computing.

Hemalatha, A., Venkatesh, R.  2014.  Redundancy management in heterogeneous wireless sensor networks. Communications and Signal Processing (ICCSP), 2014 International Conference on. :1849-1853.

A wireless sensor network is a special type of ad hoc network composed of a large number of sensor nodes spread over a wide geographical area. Each sensor node has wireless communication capability and sufficient intelligence for signal processing and for disseminating data to the collection center. This paper deals with redundancy management for improving network efficiency and query reliability in heterogeneous wireless sensor networks. The proposed scheme finds a reliable path using a redundancy management algorithm and detects unreliable nodes by discarding the corresponding path. The redundancy management algorithm finds the reliable path based on the redundancy level and the average distance between the source node and the destination node, analyzing the redundancy level as path and source redundancy. For finding the path from the source cluster head (CH) to the processing center, we propose intrusion tolerance in the presence of unreliable nodes. Finally, we apply the analysis results to the redundancy management algorithm to find the reliable path, improving network efficiency and query success probability.

Miller, Andrew, Hicks, Michael, Katz, Jonathan, Shi, Elaine.  2014.  Authenticated Data Structures, Generically. SIGPLAN Not. 49:411-423.

An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations.

This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
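
The underlying prover/verifier idea (independent of the λ• language machinery) can be sketched in a few dozen lines: the Python below, a hand-rolled approximation rather than the paper's compiler, Merkle-hashes a binary search tree, has the prover emit the visited nodes as the proof, and has the verifier replay the search holding only the root digest. Names and encoding choices are ours, not the paper's.

```python
import hashlib, json

def H(x) -> str:
    return hashlib.sha256(json.dumps(x, sort_keys=True).encode()).hexdigest()

def build(sorted_keys):
    """Merkle BST as nested tuples; returns (tree, digest)."""
    if not sorted_keys:
        return None, H(None)
    mid = len(sorted_keys) // 2
    left, lh = build(sorted_keys[:mid])
    right, rh = build(sorted_keys[mid + 1:])
    node = {"key": sorted_keys[mid], "left": lh, "right": rh}
    return (node, left, right), H(node)

def prove_lookup(tree, key, proof):
    """Prover: descend normally, recording each visited node in the proof."""
    if tree is None:
        return False
    node, left, right = tree
    proof.append(node)
    if key == node["key"]:
        return True
    return prove_lookup(left if key < node["key"] else right, key, proof)

def verify_lookup(digest, key, found, proof):
    """Verifier: re-hash each proof step against the expected child digest."""
    for node in proof:
        if H(node) != digest:
            return False          # proof step inconsistent with the digest
        if key == node["key"]:
            return found is True
        digest = node["left"] if key < node["key"] else node["right"]
    return (found is False) and digest == H(None)  # search fell off an empty subtree

tree, root = build(list(range(0, 100, 2)))
proof = []
found = prove_lookup(tree, 42, proof)
assert verify_lookup(root, 42, found, proof)       # honest prover accepted
assert not verify_lookup(root, 43, True, proof)    # lying about membership fails
```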

Han, Lansheng, Qian, Mengxiao, Xu, Xingbo, Fu, Cai, Kwisaba, Hamza.  2014.  Malicious code detection model based on behavior association. Tsinghua Science and Technology. 19:508-515.

Malicious applications can be introduced to attack users and services so as to gain financial rewards, harvest individuals' sensitive information and company and government intellectual property, and gain remote control of systems. However, traditional methods of malicious code detection, such as signature detection, behavior detection, virtual machine detection, and heuristic detection, have various weaknesses that make them unreliable. This paper surveys existing technologies for malicious code detection and proposes a malicious code detection model based on behavior association. The behavior points of malicious code are first extracted through API monitoring technology and integrated into behaviors; relations between behaviors are then established according to data dependence. Next, a behavior association model is built and a discrimination method is put forward using pushdown automata. Finally, actual malicious code is taken as a sample for experiments on behavior capture, association, and discrimination, demonstrating that the theoretical model is viable.
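
A toy Python sketch of the behavior-association idea follows: monitored API calls become behavior nodes, edges follow data dependence (one call's output feeds another's input), and a known malicious chain is matched along the resulting graph. The paper discriminates with a pushdown automaton; a simple chain match stands in here, and the API names, arguments, and chain are invented for illustration.

```python
from collections import defaultdict

calls = [  # (call, inputs, outputs) as captured by an API monitor
    ("CreateFile",  [],                    ["C:\\tmp\\drop.exe"]),
    ("WriteFile",   ["C:\\tmp\\drop.exe"], ["C:\\tmp\\drop.exe"]),
    ("RegSetValue", ["C:\\tmp\\drop.exe"], ["HKLM\\Software\\Run"]),
    ("connect",     ["HKLM\\Software\\Run"], ["10.0.0.1:4444"]),
]

# Behavior association: edge i -> j when an output of call i is an input of call j.
edges = defaultdict(list)
for i, (_, _, outs) in enumerate(calls):
    for j, (_, ins, _) in enumerate(calls):
        if i != j and set(outs) & set(ins):
            edges[i].append(j)

MALICIOUS_CHAIN = ["CreateFile", "WriteFile", "RegSetValue", "connect"]

def matches(start, chain):
    """Walk the association graph looking for the named call chain."""
    if calls[start][0] != chain[0]:
        return False
    if len(chain) == 1:
        return True
    return any(matches(nxt, chain[1:]) for nxt in edges[start])

print("malicious:", any(matches(i, MALICIOUS_CHAIN) for i in range(len(calls))))
```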

Mashima, D., Herberg, U., Wei-Peng Chen.  2014.  Enhancing Demand Response signal verification in automated Demand Response systems. Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES. :1-5.

Demand Response (DR) is a promising technology for meeting the world's ever-increasing energy demands without a corresponding increase in energy generation, and for providing a sustainable way to integrate renewables into the power grid. As a result, interest in automated DR is increasing globally and has led to the development of OpenADR, an internationally recognized standard. In this paper, we propose security-enhancement mechanisms that provide DR participants with verifiable information they can use to make informed decisions about the validity of received DR event information.

Hassen, H., Khemakhem, M.  2014.  A secured distributed OCR system in a pervasive environment with authentication as a service in the Cloud. Multimedia Computing and Systems (ICMCS), 2014 International Conference on. :1200-1205.

In this paper we explore the potential of securing a distributed Arabic Optical Character Recognition (OCR) system via cloud computing technology in a pervasive and mobile environment. The goal of the system is to achieve full accuracy, high speed, and security while handling large vocabularies and large volumes of documents. This is accomplished by integrating the recognition process and security measures with multiprocessing and distributed computing technologies.

Chiang, R., Rajasekaran, S., Zhang, N., Huang, H.  2014.  Swiper: Exploiting Virtual Machine Vulnerability in Third-Party Clouds with Competition for I/O Resources. Parallel and Distributed Systems, IEEE Transactions on. PP:1-1.

The emerging paradigm of cloud computing, e.g., Amazon Elastic Compute Cloud (EC2), promises a highly flexible yet robust environment for large-scale applications. Ideally, while multiple virtual machines (VMs) share the same physical resources (e.g., CPUs, caches, DRAM, and I/O devices), each application should be allocated to an independently managed VM and isolated from the others. Unfortunately, the absence of physical isolation inevitably opens doors to a number of security threats. In this paper, we demonstrate in EC2 a new type of security vulnerability caused by competition between virtual I/O workloads: by leveraging the competition for shared resources, an adversary can intentionally slow down the execution of a targeted application in a VM that shares the same hardware. In particular, we focus on I/O resources such as hard-drive throughput and network bandwidth, which are critical for data-intensive applications. We design and implement Swiper, a framework that uses a carefully designed workload to incur significant delays on the targeted application and VM with minimum cost (i.e., resource consumption). We conduct a comprehensive set of experiments in EC2, which clearly demonstrate that Swiper is capable of significantly slowing down various server applications while consuming a small amount of resources.

Mianxiong Dong, He Lit, Ota, K., Haojin Zhu.  2014.  HVSTO: Efficient privacy preserving hybrid storage in cloud data center. Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on. :529-534.

In cloud data centers, well-managed shared storage is the main structure used for storing virtual machines (VMs). In this paper, we propose Hybrid VM Storage (HVSTO), a privacy-preserving shared storage system designed for virtual machine storage in large-scale cloud data centers. Unlike traditional shared storage, HVSTO adopts a distributed structure to preserve the privacy of virtual machines, which is threatened in the traditional centralized structure. To improve I/O latency in this distributed structure, we use a hybrid system that combines solid-state disks and distributed storage. Evaluation of our demonstration system shows that HVSTO provides scalable and sufficient throughput for the platform-as-a-service infrastructure.

Bian Yang, Huiguang Chu, Guoqiang Li, Petrovic, S., Busch, C.  2014.  Cloud Password Manager Using Privacy-Preserved Biometrics. Cloud Engineering (IC2E), 2014 IEEE International Conference on. :505-509.

Using one password for all web services is not secure because the leakage of that password compromises all of the user's web service accounts, while using independent passwords for different web services is inconvenient for the identity claimant to memorize. A password manager addresses this security-convenience dilemma by storing and retrieving multiple existing passwords using one master password. A password manager also liberates the human brain by enabling people to generate strong passwords without worrying about memorizing them. While a password manager provides a convenient and secure way of managing multiple passwords, it centralizes password storage and shifts the risk of password leakage from distributed service providers to a piece of software or a token authenticated by a single master password. To strengthen this single-master-password security, biometrics can be used as a second authentication factor to verify ownership of the master password. However, biometrics-based authentication raises more privacy concerns than a non-biometric password manager. In this paper we propose a cloud password manager scheme exploiting privacy-enhanced biometrics, which achieves both security and convenience in a privacy-enhanced way. The proposed scheme relies on a cloud service to synchronize all local password manager clients in encrypted form, which makes updates efficient to deploy while remaining secure against untrusted cloud service providers.
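
The non-biometric core of such a manager, a master-password-locked vault whose encrypted blob is what gets synchronized through an untrusted cloud, can be sketched as follows. This is an illustrative sketch, not the paper's scheme: the biometric second factor is omitted, it assumes the third-party `cryptography` package, and the KDF parameters are arbitrary choices.

```python
import base64, hashlib, json, os
from cryptography.fernet import Fernet

def vault_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a Fernet key with PBKDF2-HMAC-SHA256."""
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)   # Fernet expects a base64 32-byte key

def lock(passwords: dict, master: str) -> bytes:
    salt = os.urandom(16)
    token = Fernet(vault_key(master, salt)).encrypt(json.dumps(passwords).encode())
    return salt + token                    # this opaque blob is what gets synced

def unlock(blob: bytes, master: str) -> dict:
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(vault_key(master, salt)).decrypt(token))

blob = lock({"mail.example.com": "s3cr3t", "bank.example.com": "hunter2"},
            master="correct horse battery staple")
print(unlock(blob, "correct horse battery staple")["mail.example.com"])
```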