Biblio

Found 2371 results

Filters: First Letter Of Last Name is G
2015-04-30
Cerqueira Ferreira, H.G., De Sousa, R.T., Gomes de Deus, F.E., Dias Canedo, E..  2014.  Proposal of a secure, deployable and transparent middleware for Internet of Things. Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on. :1-4.

This paper proposes a security architecture for a transparent IoT middleware. Focused on bringing real-life objects into the virtual realm, the proposed architecture is deployable and comprises protection measures based on existing security technologies such as AES, TLS and OAuth. In this way, privacy, authenticity, integrity and confidentiality of data exchange services are provided for the generated smart objects and for the users and services involved, in a reliable and deployable manner.
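
As a rough illustration of the AES-based protection layer the abstract mentions, here is a minimal sketch, assuming Python's cryptography package, that encrypts and authenticates a smart-object reading with AES-GCM; the device identifier and payload fields are invented for the example, and the paper's actual middleware protocol may differ.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # assumed distributed over a TLS-protected channel
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, never reused under one key
reading = b'{"sensor":"temp-42","value":21.7}'   # hypothetical smart-object payload
ciphertext = aesgcm.encrypt(nonce, reading, b"device:temp-42")

# Decryption verifies integrity and authenticity; tampering raises InvalidTag.
assert aesgcm.decrypt(nonce, ciphertext, b"device:temp-42") == reading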

Youngjung Ahn, Yongsuk Lee, Jin-Young Choi, Gyungho Lee, Dongkyun Ahn.  2014.  Monitoring Translation Lookaside Buffers to Detect Code Injection Attacks. Computer. 47:66-72.

By identifying memory pages that external I/O operations have modified, the proposed scheme blocks the activation of maliciously injected code, accurately distinguishing an attack from legitimate code injection with negligible performance impact and no changes to the user application.
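
A toy model of the detection idea may help: pages written by external I/O are tracked, and an instruction fetch from such a page is flagged as possible code injection. The page numbers, events, and approval step below are illustrative assumptions, not the authors' hardware mechanism.

io_modified_pages = set()
approved_pages = set()          # pages re-approved as legitimate code (e.g., by a trusted loader)

def on_io_write(page: int):
    io_modified_pages.add(page)
    approved_pages.discard(page)

def on_instruction_fetch(page: int) -> bool:
    """Return True if the fetch is allowed, False if it looks like injected code."""
    return page not in io_modified_pages or page in approved_pages

on_io_write(0x7f3a)                      # a network buffer lands in page 0x7f3a
assert on_instruction_fetch(0x4000)      # ordinary code page: allowed
assert not on_instruction_fetch(0x7f3a)  # executing I/O-written data: flagged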

Aiash, M., Mapp, G., Gemikonakli, O..  2014.  Secure Live Virtual Machines Migration: Issues and Solutions. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :160-165.

In recent years, there has been a huge trend towards running network-intensive applications, such as Internet servers and Cloud-based services, in virtual environments, where multiple virtual machines (VMs) running on the same physical machine share its physical and network resources. In such an environment, the virtual machine monitor (VMM) virtualizes the machine's resources in terms of CPU, memory, storage, network and I/O devices to allow multiple operating systems running in different VMs to operate and access the network concurrently. A key feature of virtualization is live migration (LM), which allows the transfer of a virtual machine from one physical server to another without interrupting the services running in the virtual machine. Live migration facilitates workload balancing, fault tolerance, online system maintenance, consolidation of virtual machines, etc. However, live migration is still in an early stage of implementation and its security is yet to be evaluated. The security concerns of live migration are a major factor in its adoption by the IT industry. Therefore, this paper uses the X.805 security standard to investigate attacks on live virtual machine migration. The analysis highlights the main sources of threats and suggests approaches to tackle them. The paper also surveys and compares different proposals in the literature to secure live migration.

Bian Yang, Huiguang Chu, Guoqiang Li, Petrovic, S., Busch, C..  2014.  Cloud Password Manager Using Privacy-Preserved Biometrics. Cloud Engineering (IC2E), 2014 IEEE International Conference on. :505-509.

Using one password for all web services is not secure because the leakage of that password compromises all of the web service accounts, while using independent passwords for different web services is inconvenient for the identity claimant to memorize. A password manager is used to address this security-convenience dilemma by storing and retrieving multiple existing passwords using one master password. It also liberates the human brain by enabling people to generate strong passwords without worrying about memorizing them. While a password manager provides a convenient and secure way of managing multiple passwords, it centralizes password storage and shifts the risk of password leakage from distributed service providers to a software client or token authenticated by a single master password. To address the concerns of relying on a single master password for security, biometrics could be used as a second factor for authentication by verifying ownership of the master password. However, biometrics-based authentication raises greater privacy concerns than a non-biometric password manager. In this paper we propose a cloud password manager scheme exploiting privacy-enhanced biometrics, which achieves both security and convenience in a privacy-enhanced way. The proposed password manager scheme relies on a cloud service to synchronize all local password manager clients in encrypted form, which makes it efficient to deploy updates and secure against untrusted cloud service providers.
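
The encrypted-synchronization idea can be sketched as follows: the vault is encrypted under a key derived from the master password, so the cloud only ever stores ciphertext. The KDF parameters and vault layout below are assumptions for illustration, not the paper's protocol (which additionally involves privacy-preserved biometrics).

import os, json, base64
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

def vault_key(master_password: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)
f = Fernet(vault_key(b"correct horse battery staple", salt))
vault = {"example.com": "p4ssw0rd-1", "mail.example": "p4ssw0rd-2"}
blob = f.encrypt(json.dumps(vault).encode())   # this opaque blob is all the cloud ever sees

assert json.loads(f.decrypt(blob)) == vault    # any client holding the master password can decrypt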

Sen, S., Guha, S., Datta, A., Rajamani, S.K., Tsai, J., Wing, J.M..  2014.  Bootstrapping Privacy Compliance in Big Data Systems. Security and Privacy (SP), 2014 IEEE Symposium on. :327-342.

With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
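
The reduction of compliance checking to checking data-type and purpose restrictions can be pictured with a minimal sketch; the clause format, program labels, and names below are invented examples, not Legalease syntax or Grok output.

# Policy: DENY use of a data type for a purpose.
DENY = [("IPAddress", "Advertising"), ("SearchQuery", "Sharing")]

# Information-flow labels a Grok-like inventory might infer per program (illustrative).
programs = {
    "ads_pipeline":  {"purpose": "Advertising", "reads": {"IPAddress", "QueryHash"}},
    "spell_checker": {"purpose": "ProductImprovement", "reads": {"SearchQuery"}},
}

def violations(programs, deny):
    for name, meta in programs.items():
        for datatype, purpose in deny:
            if meta["purpose"] == purpose and datatype in meta["reads"]:
                yield name, datatype, purpose

for name, datatype, purpose in violations(programs, DENY):
    print(f"{name}: uses {datatype} for {purpose} (policy violation)")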

Goldman, A.D., Uluagac, A.S., Copeland, J.A..  2014.  Cryptographically-Curated File System (CCFS): Secure, inter-operable, and easily implementable Information-Centric Networking. Local Computer Networks (LCN), 2014 IEEE 39th Conference on. :142-149.

The Cryptographically-Curated File System (CCFS) proposed in this work supports the adoption of Information-Centric Networking. CCFS utilizes content names that span trust boundaries, verify integrity, tolerate disruption, authenticate content, and provide non-repudiation. Irrespective of the ability to reach an authoritative host, CCFS provides secure access by binding a chain of trust into the content name itself. Curators cryptographically bind content to a name, which is a path through a series of objects that map human-meaningful names to cryptographically strong content identifiers. CCFS serves as a network layer for storage systems, unifying currently disparate storage technologies. The power of CCFS derives from file hashes and public keys used both as names with which to retrieve content and as a means of verifying that content. We present results from our prototype implementation. Our results show that the overhead associated with CCFS is not negligible, but it is not prohibitive either.
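
The core naming idea, that a cryptographic hash doubles as a self-certifying content name, can be sketched briefly; the in-memory dictionary below stands in for untrusted network storage, and CCFS's curated name paths and public-key links are not modeled.

import hashlib

network = {}                                   # untrusted store: name -> bytes

def publish(content: bytes) -> str:
    name = hashlib.sha256(content).hexdigest() # the name *is* the integrity check
    network[name] = content
    return name

def retrieve(name: str) -> bytes:
    content = network[name]
    if hashlib.sha256(content).hexdigest() != name:
        raise ValueError("content does not match its cryptographic name")
    return content

name = publish(b"curated document v1")
assert retrieve(name) == b"curated document v1"

Public keys could extend the same pattern to mutable, curator-signed names, which is where the chain of trust bound into CCFS names comes in.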

Godwin, J.L., Matthews, P..  2014.  Rapid labelling of SCADA data to extract transparent rules using RIPPER. Reliability and Maintainability Symposium (RAMS), 2014 Annual. :1-7.

This paper presents a robust methodology for developing a statistically sound prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics-of-failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner was utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% against the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96% reported. In total, 12 rules were extracted; with an independent domain expert providing critical analysis, two thirds of the rules were deemed intuitive in modelling the fundamental degradation behaviour of the wind turbine gearbox.
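
The labelling step can be illustrated with a plain (non-robust) bivariate Mahalanobis distance; the synthetic data, feature names, and 3-sigma threshold below are assumptions, whereas the paper uses a robust derivation on real turbine SCADA channels.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic "healthy" operating data for two channels, e.g., bearing and oil temperature.
healthy = rng.multivariate_normal([60.0, 40.0], [[4.0, 1.5], [1.5, 2.0]], size=500)

mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

threshold = 3.0                          # label points beyond roughly 3 sigma as "degraded"
sample = np.array([72.0, 49.0])
label = "degraded" if mahalanobis(sample) > threshold else "healthy"
print(label, round(mahalanobis(sample), 2))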

Fei Hao, Geyong Min, Man Lin, Changqing Luo, Yang, L.T..  2014.  MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks. Parallel and Distributed Systems, IEEE Transactions on. 25:2944-2955.

Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role in establishing social links between mobile users in MSNs. Taking into account the non-semantic representation of trust between users in existing trust models for social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another who may not be directly connected in the trust graph of MSNs. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context-aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, the fuzzy linguistic technique is used to express the trust between two mobile users and enhance human understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with a high precision.
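
One way to picture the fuzzy linguistic step is a mapping from a numeric trust value in [0, 1] onto linguistic terms via triangular membership functions; the term set and breakpoints below are illustrative assumptions, not the paper's calibrated model.

def triangular(x, a, b, c):
    """Membership of x in a triangle peaking at b with feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Feet nudged just past [0, 1] so the endpoints receive full membership.
TERMS = {"low": (-0.001, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.001)}

def linguistic_trust(value: float) -> str:
    memberships = {term: triangular(value, *abc) for term, abc in TERMS.items()}
    return max(memberships, key=memberships.get)

print(linguistic_trust(0.82))   # -> "high": easier for a user to read than 0.82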

Kounelis, I., Baldini, G., Neisse, R., Steri, G., Tallacchini, M., Guimaraes Pereira, A..  2014.  Building Trust in the Human-Internet of Things Relationship. Technology and Society Magazine, IEEE. 33:73-80.

Our vision in this paper is that agency, as the individual ability to intervene and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we will first address the issue of agency, namely the individual capability to adopt free decisions, as a relevant driver in building trusted human-IoT relations, and how agency should be embedded in digital systems. Then we present the main challenges posed by existing approaches to implement this vision. We show then our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.

Sridhar, S., Govindarasu, M..  2014.  Model-Based Attack Detection and Mitigation for Automatic Generation Control. Smart Grid, IEEE Transactions on. 5:580-591.

Cyber systems play a critical role in improving the efficiency and reliability of power system operation and ensuring the system remains within safe operating margins. An adversary can inflict severe damage to the underlying physical system by compromising the control and monitoring applications facilitated by the cyber layer. Protection of critical assets from electronic threats has traditionally been done through conventional cyber security measures that involve host-based and network-based security technologies. However, it has been recognized that highly skilled attacks can bypass these security mechanisms to disrupt the smooth operation of control systems. There is a growing need for cyber-attack-resilient control techniques that look beyond traditional cyber defense mechanisms to detect highly skilled attacks. In this paper, we make the following contributions. We first demonstrate the impact of data integrity attacks on Automatic Generation Control (AGC), in terms of power system frequency and electricity market operation. We propose a general framework for the application of attack-resilient control to power systems as a composition of smart attack detection and mitigation. Finally, we develop a model-based anomaly detection and attack mitigation algorithm for AGC. We evaluate the detection capability of the proposed anomaly detection algorithm through simulation studies. Our results show that the algorithm is capable of detecting scaling and ramp attacks with low false positive and negative rates. The proposed model-based mitigation algorithm is also efficient in maintaining system frequency within acceptable limits during the attack period.
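
The flavor of model-based detection for scaling and ramp attacks can be sketched simply: flag telemetry that deviates from a model forecast beyond a bound. The forecast values, tolerance, and attack parameters below are invented for illustration, not the paper's AGC model.

def detect(measured, predicted, tol=0.05):
    """Flag sample indices whose deviation from the model forecast exceeds tol (Hz)."""
    return [t for t, (m, p) in enumerate(zip(measured, predicted)) if abs(m - p) > tol]

predicted = [60.00, 60.00, 59.99, 60.01, 60.00, 60.00]   # model-forecast frequency

scaled = [1.01 * p for p in predicted]                    # scaling attack: x' = lambda * x
ramped = [p + 0.02 * t for t, p in enumerate(predicted)]  # ramp attack: x' = x + r * t

print("scaling flagged at t =", detect(scaled, predicted))   # every sample
print("ramp flagged at t =", detect(ramped, predicted))      # once the ramp exceeds tol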

Junho Hong, Chen-Ching Liu, Govindarasu, M..  2014.  Integrated Anomaly Detection for Cyber Security of the Substations. Smart Grid, IEEE Transactions on. 5:1643-1653.

Cyber intrusions into substations of a power grid are a source of vulnerability, since most substations are unmanned and have only limited physical security protection. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which contains host- and network-based anomaly detection systems for the substations, and simultaneous anomaly detection for multiple substations. Potential scenarios of simultaneous intrusions into the substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user interfaces, Intelligent Electronic Devices (IEDs) and circuit breakers. The malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attacks at multiple substations and their locations. The result is a new integrated tool for detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
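
One network-based check the abstract implies can be sketched as a rate monitor for GOOSE multicast frames; the baseline window and threshold below are illustrative assumptions rather than the proposed ADS's actual rules.

from collections import deque

class GooseRateMonitor:
    def __init__(self, window_s=1.0, max_per_window=20):
        self.window_s, self.max_per_window = window_s, max_per_window
        self.timestamps = deque()

    def observe(self, t: float) -> bool:
        """Record a GOOSE frame at time t; return True if the rate is anomalous."""
        self.timestamps.append(t)
        while self.timestamps and t - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_per_window

mon = GooseRateMonitor()
alerts = [mon.observe(0.01 * i) for i in range(100)]   # burst: 100 frames in one second
print("anomaly raised:", any(alerts))                  # e.g., forged trip-command flood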

Geva, M., Herzberg, A., Gev, Y..  2014.  Bandwidth Distributed Denial of Service: Attacks and Defenses. Security Privacy, IEEE. 12:54-61.

The Internet is vulnerable to bandwidth distributed denial-of-service (BW-DDoS) attacks, wherein many hosts send a huge number of packets to cause congestion and disrupt legitimate traffic. So far, BW-DDoS attacks have employed relatively crude, inefficient, brute force mechanisms; future attacks might be significantly more effective and harmful. To meet the increasing threats, we must deploy more advanced defenses.

Girma, Anteneh, Garuba, Moses, Goel, Rojini.  2014.  Cloud Computing Vulnerability: DDoS As Its Main Security Threat, and Analysis of IDS As a Solution Model. Proceedings of the 2014 11th International Conference on Information Technology: New Generations. :307–312.

Cloud computing has emerged as an increasingly popular means of delivering IT-enabled business services and a potential technology resource choice for many private and government organizations in today's rapidly changing computing environment. Consequently, as cloud computing technology, functionality, and usability expand, unique security vulnerabilities and threats requiring timely attention arise continuously. The primary challenge is providing continuous service availability. This paper will address cloud security vulnerability issues, the threats propagated by a distributed denial of service (DDoS) attack on cloud computing infrastructure, and the means and techniques that could detect and prevent the attacks.

Grilo, A.M., Chen, J., Diaz, M., Garrido, D., Casaca, A..  2014.  An Integrated WSAN and SCADA System for Monitoring a Critical Infrastructure. Industrial Informatics, IEEE Transactions on. 10:1755-1764.

Wireless sensor and actuator networks (WSAN) constitute an emerging technology with multiple applications in many different fields. Due to the features of WSAN (dynamism, redundancy, fault tolerance, and self-organization), this technology can be used as a supporting technology for the monitoring of critical infrastructures (CIs). For decades, the monitoring of CIs has centered on supervisory control and data acquisition (SCADA) systems, where operators can monitor and control the behavior of the system. The reach of the SCADA system has been hampered by the lack of deployment flexibility of the sensors that feed it with monitoring data. The integration of a multihop WSAN with SCADA for CI monitoring constitutes a novel approach to extend the SCADA reach in a cost-effective way, eliminating this handicap. However, the integration of WSAN and SCADA presents some challenges which have to be addressed in order to comprehensively take advantage of the WSAN features. This paper presents a solution for this joint integration. The solution uses a gateway and a Web services approach together with a Web-based SCADA, which provides an integrated platform accessible from the Internet. A real scenario where this solution has been successfully applied to monitor an electrical power grid is presented.
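
The gateway-plus-Web-services pattern can be sketched with only the Python standard library: WSAN readings cached at the gateway are exposed as a web resource that the Web-based SCADA can poll. Node IDs, paths, and payload fields are invented for the example.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Latest readings, assumed to be fed in by the multihop WSAN side of the gateway.
WSAN_CACHE = {"node-17": {"voltage_kV": 132.4, "ts": "2014-01-01T12:00:00Z"}}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        node = self.path.strip("/")                 # e.g., GET /node-17
        body = json.dumps(WSAN_CACHE.get(node, {})).encode()
        self.send_response(200 if node in WSAN_CACHE else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), GatewayHandler).serve_forever()   # the SCADA polls this endpoint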

2014-09-26
Bau, J., Bursztein, E., Gupta, D., Mitchell, J..  2010.  State of the Art: Automated Black-Box Web Application Vulnerability Testing. Security and Privacy (SP), 2010 IEEE Symposium on. :332-345.

Black-box web application vulnerability scanners are automated tools that probe web applications for security vulnerabilities. In order to assess the current state of the art, we obtained access to eight leading tools and carried out a study of: (i) the class of vulnerabilities tested by these scanners, (ii) their effectiveness against target vulnerabilities, and (iii) the relevance of the target vulnerabilities to vulnerabilities found in the wild. To conduct our study we used a custom web application vulnerable to known and projected vulnerabilities, and previous versions of widely used web applications containing known vulnerabilities. Our results show the promise and effectiveness of automated tools, as a group, and also some limitations. In particular, "stored" forms of Cross Site Scripting (XSS) and SQL Injection (SQLI) vulnerabilities are not currently found by many tools. Because our goal is to assess the potential of future research, not to evaluate specific vendors, we do not report comparative data or make any recommendations about purchase of specific tools.
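
The shape of one probe such scanners automate, a reflected XSS check, is sketched below; the target URL and parameter are placeholders, and probes like this should only ever be run against applications you own.

import urllib.parse, urllib.request

def probe_reflected_xss(base_url: str, param: str) -> bool:
    """Submit a marker payload and report whether it is reflected unescaped."""
    marker = "<script>xss-probe-1337</script>"
    url = f"{base_url}?{urllib.parse.urlencode({param: marker})}"
    with urllib.request.urlopen(url) as resp:
        return marker in resp.read().decode(errors="replace")

# probe_reflected_xss("http://localhost:8000/search", "q")  # True -> likely vulnerable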

Henry, R., Goldberg, I.  2011.  Formalizing Anonymous Blacklisting Systems. Security and Privacy (SP), 2011 IEEE Symposium on. :81-95.

Anonymous communications networks, such as Tor, help to solve the real and important problem of enabling users to communicate privately over the Internet. However, in doing so, anonymous communications networks introduce an entirely new problem for the service providers - such as websites, IRC networks or mail servers - with which these users interact; in particular, since all anonymous users look alike, there is no way for the service providers to hold individual misbehaving anonymous users accountable for their actions. Recent research efforts have focused on using anonymous blacklisting systems (which are sometimes called anonymous revocation systems) to empower service providers with the ability to revoke access from abusive anonymous users. In contrast to revocable anonymity systems, which enable some trusted third party to deanonymize users, anonymous blacklisting systems provide users with a way to authenticate anonymously with a service provider, while enabling the service provider to revoke access from any users that misbehave, without revealing their identities. In this paper, we introduce the anonymous blacklisting problem and survey the literature on anonymous blacklisting systems, comparing and contrasting the architecture of various existing schemes and discussing the tradeoffs inherent in each design. The literature on anonymous blacklisting systems lacks a unified set of definitions; each scheme operates under different trust assumptions and provides different security and privacy guarantees. Therefore, before we discuss the existing approaches in detail, we first propose a formal definition for anonymous blacklisting systems, and a set of security and privacy properties that these systems should possess. We also outline a set of new performance requirements that anonymous blacklisting systems should satisfy to maximize their potential for real-world adoption, and give formal definitions for several optional features already supported by some schemes in the literature.

Rossow, C., Dietrich, C.J., Grier, C., Kreibich, C., Paxson, V., Pohlmann, N., Bos, H., van Steen, M..  2012.  Prudent Practices for Designing Malware Experiments: Status Quo and Outlook. Security and Privacy (SP), 2012 IEEE Symposium on. :65-79.

Malware researchers rely on the observation of malicious code in execution to collect datasets for a wide array of experiments, including generation of detection models, study of longitudinal behavior, and validation of prior research. For such research to reflect prudent science, the work needs to address a number of concerns relating to the correct and representative use of the datasets, presentation of methodology in a fashion sufficiently transparent to enable reproducibility, and due consideration of the need not to harm others. In this paper we study the methodological rigor and prudence in 36 academic publications from 2006-2011 that rely on malware execution. 40% of these papers appeared in the 6 highest-ranked academic security conferences. We find frequent shortcomings, including problematic assumptions regarding the use of execution-driven datasets (25% of the papers), absence of description of security precautions taken during experiments (71% of the articles), and oftentimes insufficient description of the experimental setup. Deficiencies occur in top-tier venues and elsewhere alike, highlighting a need for the community to improve its handling of malware datasets. In the hope of aiding authors, reviewers, and readers, we frame guidelines regarding transparency, realism, correctness, and safety for collecting and using malware datasets.

2014-09-17
Parno, B., Howell, J., Gentry, C., Raykova, M..  2013.  Pinocchio: Nearly Practical Verifiable Computation. Security and Privacy (SP), 2013 IEEE Symposium on. :238-252.

To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60x. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
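
Pinocchio's pairing-based protocol is beyond a short snippet, but the principle of verification cheaper than re-execution can be demonstrated with a self-contained analogue: Freivalds' probabilistic check that a claimed matrix product C = A*B is correct in O(n^2) work per round instead of the O(n^3) recomputation. This is only an analogy for the cost asymmetry, not Pinocchio's construction.

import numpy as np

rng = np.random.default_rng(1)

def freivalds(A, B, C, rounds=20):
    """Probabilistically check A @ B == C; error probability <= 2**-rounds."""
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))          # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):   # two matrix-vector products per side
            return False
    return True

A = rng.integers(0, 10, size=(50, 50))
B = rng.integers(0, 10, size=(50, 50))
C = A @ B
assert freivalds(A, B, C)            # honest worker's result accepted
C[0, 0] += 1                         # a single tampered entry
assert not freivalds(A, B, C)        # rejected with overwhelming probability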

Colbaugh, R., Glass, K..  2013.  Moving target defense for adaptive adversaries. Intelligence and Security Informatics (ISI), 2013 IEEE International Conference on. :50-55.

Machine learning (ML) plays a central role in the solution of many security problems, for example enabling malicious and innocent activities to be rapidly and accurately distinguished and appropriate actions to be taken. Unfortunately, a standard assumption in ML - that the training and test data are identically distributed - is typically violated in security applications, leading to degraded algorithm performance and reduced security. Previous research has attempted to address this challenge by developing ML algorithms which are either robust to differences between training and test data or are able to predict and account for these differences. This paper adopts a different approach, developing a class of moving target (MT) defenses that are difficult for adversaries to reverse-engineer, which in turn decreases the adversaries' ability to generate training/test data differences that benefit them. We leverage the coevolutionary relationship between attackers and defenders to derive a simple, flexible MT defense strategy which is optimal or nearly optimal for a broad range of security problems. Case studies involving two distinct cyber defense applications demonstrate that the proposed MT algorithm outperforms standard static methods, offering effective defense against intelligent, adaptive adversaries.
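
A toy rendering of the moving-target idea: the defender randomizes which of several trained classifiers answers each query, so a probing adversary cannot reliably reverse-engineer a single decision boundary. The classifier set and thresholds below are illustrative, not the paper's coevolutionary derivation.

import random

def make_threshold_clf(threshold):
    return lambda score: "malicious" if score > threshold else "benign"

ensemble = [make_threshold_clf(t) for t in (0.4, 0.5, 0.6)]   # diverse hypotheses

def mt_classify(score: float) -> str:
    clf = random.choice(ensemble)       # fresh random draw per query
    return clf(score)

# A probing adversary sees inconsistent responses near the boundary, which blunts
# attempts to craft inputs that just barely evade detection.
print([mt_classify(0.55) for _ in range(8)])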

Tembe, Rucha, Zielinska, Olga, Liu, Yuqi, Hong, Kyung Wha, Murphy-Hill, Emerson, Mayhorn, Chris, Ge, Xi.  2014.  Phishing in International Waters: Exploring Cross-national Differences in Phishing Conceptualizations Between Chinese, Indian and American Samples. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :8:1–8:7.

One hundred sixty-four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regards to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.

Schmerl, Bradley, Cámara, Javier, Gennari, Jeffrey, Garlan, David, Casanova, Paulo, Moreno, Gabriel A., Glazier, Thomas J., Barnes, Jeffrey M..  2014.  Architecture-based Self-protection: Composing and Reasoning About Denial-of-service Mitigations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :2:1–2:12.

Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
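
Utility-based tactic selection can be sketched as scoring each mitigation against weighted business qualities and picking the maximizer for the current context; the tactics, qualities, and weights below are invented for illustration, not Rainbow's actual utility model.

# Per-tactic scores on each quality dimension, normalized to [0, 1] (higher is better).
TACTICS = {
    "add_capacity":  {"availability": 0.9, "cost": 0.2, "user_experience": 0.9},
    "rate_limit":    {"availability": 0.6, "cost": 0.8, "user_experience": 0.5},
    "add_captcha":   {"availability": 0.7, "cost": 0.9, "user_experience": 0.2},
    "blackhole_ips": {"availability": 0.8, "cost": 0.9, "user_experience": 0.4},
}

def best_tactic(weights):
    def utility(scores):
        return sum(weights[q] * scores[q] for q in weights)
    return max(TACTICS, key=lambda t: utility(TACTICS[t]))

# Two business contexts weight the same qualities differently and so choose differently:
print(best_tactic({"availability": 0.6, "cost": 0.1, "user_experience": 0.3}))  # add_capacity
print(best_tactic({"availability": 0.2, "cost": 0.7, "user_experience": 0.1}))  # blackhole_ips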