Biblio

Found 201 results

Filters: Keyword is policy
2018-01-23
Alasad, Qutaiba, Yuan, Jiann, Fan, Deliang.  2017.  Leveraging All-Spin Logic to Improve Hardware Security. Proceedings of the Great Lakes Symposium on VLSI 2017. :491–494.

Due to the globalization of Integrated Circuit (IC) design in the semiconductor industry and the outsourcing of chip manufacturing, third-party Intellectual Properties (3PIPs) have become vulnerable to IP piracy, reverse engineering, counterfeit ICs, and hardware Trojans. A designer has to employ a strong technique to thwart such attacks, e.g., the Strong Logic Locking method [1]. However, such a technique cannot be used to protect some circuits, since the inserted key-gates rely on the topology of the circuit. It also requires higher power, delay, and area overheads compared to other techniques. In this paper, we present the use of spintronic devices to help protect ICs with less performance overhead. We then evaluate the proposed design based on a security metric and performance overhead. One of the best spintronic device candidates is All-Spin Logic, due to its unique properties: small area, no spin-charge signal conversion, and compatibility with conventional CMOS technology.
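
The All-Spin Logic proposal above is device-level, but the logic-locking idea it compares against can be made concrete in a few lines. The following Python sketch, written for this listing rather than taken from the paper, shows key-gate-based logic locking on a toy circuit: an XOR key-gate on an internal wire corrupts the output unless the correct key bit is applied. The circuit, wire choice, and key are all hypothetical.

# Toy illustration of key-gate-based logic locking (not the paper's
# All-Spin Logic design): an XOR key-gate on an internal wire makes the
# circuit behave correctly only for the right key bit.

def original_circuit(a, b, c):
    """Toy combinational circuit: out = (a AND b) OR c."""
    return (a & b) | c

def locked_circuit(a, b, c, key_bit):
    """Same circuit with one XOR key-gate on the internal wire (a AND b).
    The correct key bit here is 0; key_bit = 1 inverts the wire and
    corrupts the output for some input patterns."""
    internal = (a & b) ^ key_bit   # the inserted key-gate
    return internal | c

if __name__ == "__main__":
    for key in (0, 1):
        mismatches = sum(
            original_circuit(a, b, c) != locked_circuit(a, b, c, key)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)
        )
        print(f"key bit {key}: {mismatches} of 8 input patterns give a wrong output")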

Amir, Sarah, Shakya, Bicky, Forte, Domenic, Tehranipoor, Mark, Bhunia, Swarup.  2017.  Comparative Analysis of Hardware Obfuscation for IP Protection. Proceedings of the Great Lakes Symposium on VLSI 2017. :363–368.

In the era of globalized Integrated Circuit (IC) design and manufacturing flow, a rising issue for the silicon industry is the variety of attacks on hardware intellectual property (IP). As a measure to ensure security along the supply chain against IP piracy, tampering and reverse engineering, hardware obfuscation is considered a reliable defense mechanism. Sequential and combinational obfuscation are the primary classes of obfuscation, and multiple methods of each type have been proposed in recent years. This paper presents an overview of obfuscation techniques and a qualitative comparison of the two major types.

Chhotaray, Animesh, Nahiyan, Adib, Shrimpton, Thomas, Forte, Domenic, Tehranipoor, Mark.  2017.  Standardizing Bad Cryptographic Practice: A Teardown of the IEEE Standard for Protecting Electronic-design Intellectual Property. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1533–1546.

We provide an analysis of IEEE standard P1735, which describes methods for encrypting electronic-design intellectual property (IP), as well as the management of access rights for such IP. We find a surprising number of cryptographic mistakes in the standard. In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext IP. Some of these attack vectors are well-known, e.g., padding-oracle attacks. Others are new, and are made possible by the need to support the typical uses of the underlying IP; in particular, the need for commercial system-on-chip (SoC) tools to synthesize multiple pieces of IP into a fully specified chip design and to report syntax errors. We exploit these mistakes in a variety of ways, leveraging a commercial SoC tool as a black-box oracle. In addition to being able to recover entire plaintext IP, we show how to produce standard-compliant ciphertexts of IP that have been modified to include targeted hardware Trojans. For example, we produce IP that correctly implements the AES block cipher on all but one (arbitrarily chosen) plaintext, which instead induces the block cipher to return the secret key. We outline a number of other attacks that the standard allows, including on the cryptographic mechanism for IP licensing. Unfortunately, we show that obvious "quick fixes" to the standard (and the tools that support it) do not stop all of our attacks. This suggests that the standard requires a significant overhaul, and that IP-authors using P1735 encryption should consider themselves at risk.
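
Since the abstract cites padding-oracle attacks as one of the well-known vectors, a self-contained Python sketch of that generic attack is included below. It targets a toy CBC encryption oracle constructed here for illustration; it is not the authors' attack on any particular P1735-compliant tool, and the key, message, and block layout are invented.

# Generic CBC padding-oracle attack against a toy oracle (illustrative only).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def pkcs7_pad(data, block=16):
    n = block - len(data) % block
    return data + bytes([n]) * n

def pkcs7_unpad(data):
    n = data[-1]
    if n < 1 or n > 16 or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]

def encrypt(plaintext):
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(pkcs7_pad(plaintext)) + enc.finalize()

def padding_oracle(ciphertext):
    """The only feedback the attacker gets: was the padding valid?"""
    iv, body = ciphertext[:16], ciphertext[16:]
    dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
    try:
        pkcs7_unpad(dec.update(body) + dec.finalize())
        return True
    except ValueError:
        return False

def recover_block(prev_block, target_block):
    """Recover one plaintext block using only the padding oracle."""
    intermediate = bytearray(16)
    for pad in range(1, 17):                 # padding value being forced
        pos = 16 - pad
        for guess in range(256):
            fake_prev = bytearray(16)
            for i in range(pos + 1, 16):
                fake_prev[i] = intermediate[i] ^ pad
            fake_prev[pos] = guess
            if padding_oracle(bytes(fake_prev) + target_block):
                if pad == 1:                 # rule out an accidental 0x02 0x02 match
                    fake_prev[pos - 1] ^= 0xFF
                    if not padding_oracle(bytes(fake_prev) + target_block):
                        continue
                intermediate[pos] = guess ^ pad
                break
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))

if __name__ == "__main__":
    ct = encrypt(b"attack at dawn!!")
    blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
    recovered = b"".join(recover_block(blocks[i], blocks[i + 1])
                         for i in range(len(blocks) - 1))
    print(pkcs7_unpad(recovered))            # b'attack at dawn!!'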

Di Crescenzo, Giovanni, Rajendran, Jeyavijayan, Karri, Ramesh, Memon, Nasir.  2017.  Boolean Circuit Camouflage: Cryptographic Models, Limitations, Provable Results and a Random Oracle Realization. Proceedings of the 2017 Workshop on Attacks and Solutions in Hardware Security. :7–16.

Recent hardware advances, called gate camouflaging, have opened the possibility of protecting integrated circuits against reverse-engineering attacks. In this paper, we investigate the possibility of provably boosting the capability of physical camouflaging of a single Boolean gate into physical camouflaging of a larger Boolean circuit. We first propose rigorous definitions for circuit camouflage, borrowing approaches from modern cryptography and program obfuscation. Informally speaking, gate camouflaging is defined as a transformation of a physical gate that appears to mask the gate to an attacker evaluating the circuit containing this gate. Under this assumption, we formally prove two results: a limitation and a construction. Our limitation result says that there are circuits for which, no matter how many gates we camouflage, an adversary capable of evaluating the circuit will correctly guess all the camouflaged gates. Our construction result says that if pseudo-random functions exist (a common assumption in cryptography), a small number of camouflaged gates suffices to: (a) leak no additional information about the camouflaged gates to an adversary evaluating the pseudo-random function circuit; and (b) turn these functions into random oracles. These are the first results on circuit camouflaging provable in a cryptographic model (previously, constructions were given under no formal model, and were eventually reverse-engineered, or were argued secure only against specific classes of attacks). Our results imply a concrete and provable realization of random oracles which, even if under a hardware-based assumption, is applicable in many scenarios, including public-key infrastructures. Finding special conditions under which random oracles can be provably realized has been an open problem for many years, since a software-only provable implementation of random oracles was shown to be (almost certainly) impossible.

2018-01-10
Wu, Xiaotong, Dou, Wanchun, Ni, Qiang.  2017.  Game Theory Based Privacy Preserving Analysis in Correlated Data Publication. Proceedings of the Australasian Computer Science Week Multiconference. :73:1–73:10.

Privacy preservation in data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy-preserving data publication is the trade-off between privacy and utility for a single, independent data set. However, recent research has shown that the advanced privacy mechanism, i.e., differential privacy, is vulnerable when multiple data sets are correlated. In this case, the trade-off between privacy and utility evolves into a game, in which the payoff of each player depends not only on his own privacy parameter but also on his neighbors' privacy parameters. In this paper, we first present a definition of correlated differential privacy to evaluate the real privacy level of a single data set as influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash Equilibrium and establish sufficient conditions for it in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash Equilibrium.
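
The game described above is played over the standard ε-differential-privacy building block, so a minimal worked example of that mechanism may help: a Laplace-noised counting query in Python, showing what the privacy parameter each publisher tunes actually controls. The data and query are invented for illustration.

# Laplace mechanism for a counting query: sensitivity 1, noise scale 1/epsilon
# gives epsilon-differential privacy. Smaller epsilon = more privacy, more noise.
import numpy as np

def laplace_count(data, predicate, epsilon):
    true_count = sum(1 for row in data if predicate(row))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

if __name__ == "__main__":
    ages = [23, 35, 41, 29, 52, 61, 38, 45]          # toy data set
    for eps in (0.1, 1.0, 10.0):
        noisy = laplace_count(ages, lambda a: a >= 40, epsilon=eps)
        print(f"epsilon={eps:>4}: noisy count of people aged 40+ = {noisy:.2f}")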

Ping, Haoyue, Stoyanovich, Julia, Howe, Bill.  2017.  DataSynthesizer: Privacy-Preserving Synthetic Datasets. Proceedings of the 29th International Conference on Scientific and Statistical Database Management. :42:1–42:5.

To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability — the data owner does not have to specify any parameters to start generating and sharing data safely and effectively. DataSynthesizer consists of three high-level modules — DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired. We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance. The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.

Deng, Xiyue, Mirkovic, Jelena.  2017.  Commoner Privacy And A Study On Network Traces. Proceedings of the 33rd Annual Computer Security Applications Conference. :566–576.

Differential privacy has emerged as a promising mechanism for privacy-safe data mining. One popular differential privacy mechanism allows researchers to pose queries over a dataset, and adds random noise to all output points to protect privacy. While differential privacy produces useful data in many scenarios, added noise may jeopardize utility for queries posed over small populations or over long-tailed datasets. Gehrke et al. proposed crowd-blending privacy, with random noise added only to those output points where fewer than k individuals (a configurable parameter) contribute to the point in the same manner. This approach has a lower privacy guarantee, but preserves more research utility than differential privacy. We propose an even more liberal privacy goal—commoner privacy—which fuzzes (omits, aggregates or adds noise to) only those output points where an individual's contribution to this point is an outlier. By hiding outliers, our mechanism hides the presence or absence of an individual in a dataset. We propose one mechanism that achieves commoner privacy—interactive k-anonymity. We also discuss query composition and show how we can guarantee privacy via either a pre-sampling step or via query introspection. We implement interactive k-anonymity and query introspection in a system called Patrol for network trace processing. Our evaluation shows that commoner privacy prevents common attacks while preserving orders of magnitude higher research utility than differential privacy, and at least 9-49 times the utility of crowd-blending privacy.
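
As a rough illustration of the commoner-privacy idea (not the authors' Patrol implementation), the sketch below releases an aggregate exactly when enough distinct individuals contribute to it and no single individual dominates it, and hides it otherwise. The record format and thresholds are invented for the example.

# Toy "commoner privacy" filter: publish common behaviour as-is, fuzz outliers.
from collections import defaultdict

def commoner_release(records, min_contributors=5, max_share=0.5):
    """records: iterable of (individual_id, output_key, value) tuples.
    Output points dominated by one or a few individuals are suppressed;
    a real system could instead aggregate them or add noise."""
    per_person = defaultdict(lambda: defaultdict(float))
    for person, key, value in records:
        per_person[key][person] += value
    released = {}
    for key, contributions in per_person.items():
        total = sum(contributions.values())
        biggest_share = max(contributions.values()) / total if total else 1.0
        if len(contributions) >= min_contributors and biggest_share <= max_share:
            released[key] = total        # common behaviour: publish exactly
        else:
            released[key] = None         # outlier-dominated: hide
    return released

if __name__ == "__main__":
    # (host, destination port, bytes sent) flows from a toy network trace
    flows = [("h1", 80, 10), ("h2", 80, 12), ("h3", 80, 9),
             ("h4", 80, 11), ("h5", 80, 13), ("h1", 22, 500)]
    print(commoner_release(flows))       # port 80 released, port 22 suppressed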

Zhang, Jun, Cormode, Graham, Procopiuc, Cecilia M., Srivastava, Divesh, Xiao, Xiaokui.  2017.  PrivBayes: Private Data Release via Bayesian Networks. ACM Trans. Database Syst.. 42:25:1–25:41.

Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents PrivBayes, a differentially private method for releasing high-dimensional data. Given a dataset D, PrivBayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PrivBayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, PrivBayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, PrivBayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate PrivBayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy.
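
A much-simplified Python sketch of the pipeline described above: noise is injected into a low-dimensional (here, a single pairwise) marginal rather than the full table, and synthetic tuples are sampled from the noisy model. The private Bayesian-network construction step is replaced by a fixed attribute order, and the data are invented, so this illustrates the idea rather than PrivBayes itself.

# Noisy pairwise marginal + sampling, the core trick PrivBayes builds on.
import numpy as np

def noisy_joint(col_a, col_b, eps):
    """2-D histogram of two discrete columns with Laplace noise, clipped and
    renormalised into a probability table (the noisy marginal)."""
    xs, ys = sorted(set(col_a)), sorted(set(col_b))
    table = np.zeros((len(xs), len(ys)))
    for va, vb in zip(col_a, col_b):
        table[xs.index(va), ys.index(vb)] += 1
    table += np.random.laplace(scale=1.0 / eps, size=table.shape)
    table = np.clip(table, 1e-9, None)           # keep it a valid distribution
    return xs, ys, table / table.sum()

def synthesize(col_a, col_b, eps, n):
    """Sample n synthetic (a, b) tuples from the noisy marginal."""
    xs, ys, joint = noisy_joint(col_a, col_b, eps)
    p_a = joint.sum(axis=1)
    rows = []
    for _ in range(n):
        i = np.random.choice(len(xs), p=p_a)
        j = np.random.choice(len(ys), p=joint[i] / joint[i].sum())
        rows.append((xs[i], ys[j]))
    return rows

if __name__ == "__main__":
    age_band = ["20s", "30s", "30s", "40s", "20s", "40s", "30s", "20s"]
    income   = ["low", "mid", "mid", "high", "low", "high", "mid", "low"]
    for row in synthesize(age_band, income, eps=1.0, n=5):
        print(row)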

He, Zaobo, Cai, Zhipeng, Sun, Yunchuan, Li, Yingshu, Cheng, Xiuzhen.  2017.  Customized Privacy Preserving for Inherent Data and Latent Data. Personal Ubiquitous Comput.. 21:43–54.

The huge amount of sensory data collected from mobile devices has offered great potential to promote more significant services based on user data extracted from sensor readings. However, releasing user data could also seriously threaten user privacy. It is possible to directly collect sensitive information from released user data without user permission. Furthermore, third-party users can also infer sensitive information contained in released data in a latent manner by utilizing data-mining techniques. In this paper, we formally define these two types of threats as inherent data privacy and latent data privacy and construct a data-sanitization strategy that can optimize the trade-off between data utility and the two customized types of privacy. The key novel idea is that the developed strategy can combat powerful third-party users who have broad knowledge about users and launch optimal inference attacks. We show that our strategy does not greatly reduce the benefit brought by user data, while sensitive information can still be protected. To the best of our knowledge, this is the first work that preserves both inherent data privacy and latent data privacy.

Aissaoui, K., Ait Idar, H., Belhadaoui, H., Rifi, M..  2017.  Survey on data remanence in Cloud Computing environment. 2017 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS). :1–4.

Cloud Computing is a developing IT concept that faces several issues, which are slowing down its evolution and adoption by users across the world. The lack of security has been the main concern. Organizations and entities need to ensure, inter alia, the integrity and confidentiality of their outsourced sensitive data within a cloud provider's servers. Solutions have been examined in order to strengthen security models (strong authentication, encryption and fragmentation before storing, access control policies...). More particularly, data remanence is undoubtedly a major threat: how can we be sure that data, when deletion is requested, are truly and appropriately removed from remote servers? In this paper, we aim to produce a survey of this subject and to address the problem of residual data in a cloud-computing environment, which is characterized by the use of virtual machines instantiated on remote servers owned by a third party.

Vakilinia, I., Tosh, D. K., Sengupta, S..  2017.  3-Way game model for privacy-preserving cybersecurity information exchange framework. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :829–834.

With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others so that the associated risks can be assessed effectively. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, making the decision to share information is a challenging task and requires resolving the trade-off between the advantages of sharing and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system by setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component into the sharing model. Using best-response analysis, the simulation results demonstrate the efficiency of our proposed framework.

Jeyaprabha, T. J., Sumathi, G., Nivedha, P..  2017.  Smart and secure data storage using Encrypt-interleaving. 2017 Innovations in Power and Advanced Computing Technologies (i-PACT). :1–6.

In recent years, many companies have been shifting to the cloud to expand their business profit at minimal additional cost. Cloud computing is a growing technology that has emerged from the development of grid computing, virtualization and utility computing. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service-provider interaction. There was a huge data loss during the Chennai floods of December 2015; had these data been stored at distributed data centers, much of the loss could have been prevented. Although such natural calamities tempt many users to shift to cloud storage, security threats inhibit them from doing so. Many solutions have been proposed for these security issues, but they do not give guaranteed security; by guaranteed security we mean confidentiality, integrity and availability. Some of the existing techniques for providing security are cryptographic protocols, data sanitization, predicate logic, access control mechanisms, honeypots, sandboxing, erasure coding, RAID (Redundant Arrays of Independent Disks), homomorphic encryption and split-key encryption. All these techniques either cannot work alone or add computational and time complexity. An alternative scheme that combines encryption and channel coding in a single step is proposed to increase the level of security. A hybrid encryption scheme is proposed for use in the interleaver block of the Turbo coder to avoid burst errors. Hybrid encryption avoids sharing the secret key over an unsecured channel. This provides both security and reliability by reducing the error-propagation effect, with small additional cost and computational overhead. Time complexity can be reduced when encryption and encoding are done as a single process.
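
To make the encrypt-interleave idea concrete, here is a hedged Python sketch that combines keystream encryption with a simple row/column block interleaver in one pass. The paper's design places the operation inside the interleaver of a Turbo coder and uses hybrid encryption for key exchange; neither is modelled here, and the keystream construction and parameters are invented for illustration.

# Keystream encryption + block interleaving in one pass (illustrative only).
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Counter-style keystream derived from a hash, for this sketch only."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def interleave(data: bytes, rows: int = 4) -> bytes:
    """Write row-by-row into a rows x cols matrix, read column-by-column,
    so a burst error on the channel is spread across the block."""
    cols = -(-len(data) // rows)
    padded = data.ljust(rows * cols, b"\x00")
    return bytes(padded[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, rows: int = 4) -> bytes:
    cols = len(data) // rows
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

def encrypt_interleave(plaintext: bytes, key: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return interleave(ct)

def deinterleave_decrypt(received: bytes, key: bytes, orig_len: int) -> bytes:
    ct = deinterleave(received)[:orig_len]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, orig_len)))

if __name__ == "__main__":
    msg, key = b"data stored safely in the cloud", b"shared-secret"
    wire = encrypt_interleave(msg, key)
    print(deinterleave_decrypt(wire, key, len(msg)))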

Zaman, A. N. K., Obimbo, C., Dara, R. A..  2017.  An improved differential privacy algorithm to protect re-identification of data. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC). :133–138.

In recent times, there has been a huge increase in large data repositories held by corporations, governments, and healthcare organizations. These repositories provide opportunities to design and improve decision-making systems by mining trends and patterns from the data sets (which can provide credible information), for example to improve customer service in healthcare. As a result, while data sharing is essential, it is an obligation to maintain the privacy of the data donors, as data custodians have legal and ethical responsibilities to secure confidentiality. This research proposes a 2-layer privacy-preserving (2-LPP) data sanitization algorithm that satisfies ε-differential privacy for publishing sanitized data. The proposed algorithm also reduces the re-identification risk of the sanitized data. The algorithm has been implemented and tested with two different data sets. Compared to other existing works, the results obtained from the proposed algorithm show promising performance.

Procter, Sam, Vasserman, Eugene Y., Hatcliff, John.  2017.  SAFE and Secure: Deeply Integrating Security in a New Hazard Analysis. Proceedings of the 12th International Conference on Availability, Reliability and Security. :66:1–66:10.

Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique—Systematic Analysis of Faults and Errors (SAFE)—and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in—and adaptable to—different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.

Robyns, Pieter, Quax, Peter, Lamotte, Wim.  2017.  PHY-layer Security is No Alternative to Cryptography. Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :160–162.

In recent works, numerous physical-layer security systems have been proposed as alternatives to classic cryptography. Such systems aim to use the intrinsic properties of radio signals and the wireless medium to provide confidentiality and authentication to wireless devices. However, fundamental vulnerabilities are often discovered in these systems shortly after their inception. We therefore challenge the assumptions made by existing physical-layer security systems, and postulate that weaker assumptions are needed in order to adapt for practical scenarios. We also argue that if no computational advantage over an adversary can be ensured, secure communication cannot be realistically achieved.

Oosterhuis, Harrie, de Rijke, Maarten.  2017.  Sensitive and Scalable Online Evaluation with Theoretical Guarantees. Proceedings of the 2017 ACM Conference on Information and Knowledge Management. :77–86.

Multileaved comparison methods generalize interleaved comparison methods to provide a scalable approach for comparing ranking systems based on regular user interactions. Such methods enable the increasingly rapid research and development of search engines. However, existing multileaved comparison methods that provide reliable outcomes do so by degrading the user experience during evaluation. Conversely, current multileaved comparison methods that maintain the user experience cannot guarantee correctness. Our contribution is two-fold. First, we propose a theoretical framework for systematically comparing multileaved comparison methods using the notions of considerateness, which concerns maintaining the user experience, and fidelity, which concerns reliable correct outcomes. Second, we introduce a novel multileaved comparison method, Pairwise Preference Multileaving (PPM), that performs comparisons based on document-pair preferences, and prove that it is considerate and has fidelity. We show empirically that, compared to previous multileaved comparison methods, PPM is more sensitive to user preferences and scalable with the number of rankers being compared.
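
For readers unfamiliar with the setting, the Python sketch below shows a baseline team-draft-style multileaving and click crediting, i.e. the general mechanism this line of work builds on; it is not the proposed PPM method, and the rankings and clicks are invented.

# Team-draft-style multileaving: rankers take turns contributing documents,
# and clicks are credited to the ranker whose document was clicked.
import random

def team_draft_multileave(rankings, length):
    multileaved, team_of = [], {}
    while len(multileaved) < length:
        progressed = False
        for r in random.sample(range(len(rankings)), len(rankings)):
            doc = next((d for d in rankings[r] if d not in team_of), None)
            if doc is not None:
                multileaved.append(doc)
                team_of[doc] = r
                progressed = True
            if len(multileaved) == length:
                break
        if not progressed:                     # all rankings exhausted
            break
    return multileaved, team_of

def credit_clicks(clicked_docs, team_of, n_rankers):
    credits = [0] * n_rankers
    for doc in clicked_docs:
        if doc in team_of:
            credits[team_of[doc]] += 1
    return credits

if __name__ == "__main__":
    rankers = [["d1", "d2", "d3", "d4"],
               ["d3", "d1", "d5", "d2"],
               ["d5", "d4", "d1", "d3"]]
    shown, team_of = team_draft_multileave(rankers, length=5)
    print("multileaved list:", shown)
    print("credit per ranker for clicks on d3, d5:",
          credit_clicks(["d3", "d5"], team_of, len(rankers)))
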
Aman, Muhammad Naveed, Chua, Kee Chaing, Sikdar, Biplab.  2017.  Secure Data Provenance for the Internet of Things. Proceedings of the 3rd ACM International Workshop on IoT Privacy, Trust, and Security. :11–14.

The vision of smart environments, systems, and services is driven by the development of the Internet of Things (IoT). IoT devices produce large amounts of data and this data is used to make critical decisions in many systems. The data produced by these devices has to satisfy various security related requirements in order to be useful in practical scenarios. One of these requirements is data provenance which allows a user to trust the data regarding its origin and location. The low cost of many IoT devices and the fact that they may be deployed in unprotected spaces requires security protocols to be efficient and secure against physical attacks. This paper proposes a light-weight protocol for data provenance in the IoT. The proposed protocol uses physical unclonable functions (PUFs) to provide physical security and uniquely identify an IoT device. Moreover, wireless channel characteristics are used to uniquely identify a wireless link between an IoT device and a server/user. A brief security and performance analysis are presented to give a preliminary validation of the protocol.
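
A simulation-only Python sketch of the kind of PUF-based challenge-response flow the paper builds on: the PUF is modelled as a keyed hash so that enrolment, challenge, response, and verification can be run end to end. The class names and message fields are illustrative, and the wireless-channel fingerprinting part of the protocol is omitted.

# Simulated PUF provenance check; a real PUF derives its response from
# device-specific hardware variation rather than a stored secret.
import hashlib, hmac, os

class SimulatedPUF:
    def __init__(self):
        self._device_secret = os.urandom(16)   # stands in for silicon variation
    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._device_secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.crp_db = {}                       # device_id -> {challenge: response}
    def enrol(self, device_id, puf, n=4):
        """Collect challenge-response pairs in a trusted enrolment phase."""
        self.crp_db[device_id] = {c: puf.response(c)
                                  for c in (os.urandom(8) for _ in range(n))}
    def verify(self, device_id, sensor_reading, respond):
        challenge, expected = self.crp_db[device_id].popitem()   # use each CRP once
        tag = respond(challenge, sensor_reading)
        return hmac.compare_digest(
            tag, hmac.new(expected, sensor_reading, hashlib.sha256).digest())

if __name__ == "__main__":
    puf, server = SimulatedPUF(), Server()
    server.enrol("sensor-42", puf)

    def device_respond(challenge, reading):
        # the device binds its reading to the PUF response, giving provenance
        return hmac.new(puf.response(challenge), reading, hashlib.sha256).digest()

    print(server.verify("sensor-42", b"temp=21.5C", device_respond))   # True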

Laube, Stefan, Böhme, Rainer.  2017.  Strategic Aspects of Cyber Risk Information Sharing. ACM Comput. Surv.. 50:77:1–77:36.

Cyber risk management largely reduces to a race for information between defenders of ICT systems and attackers. Defenders can gain advantage in this race by sharing cyber risk information with each other. Yet, they often exchange less information than is socially desirable, because sharing decisions are guided by selfish rather than altruistic reasons. A growing line of research studies these strategic aspects that drive defenders’ sharing decisions. The present survey systematizes these works in a novel framework. It provides a consolidated understanding of defenders’ strategies to privately or publicly share information and enables us to distill trends in the literature and identify future research directions. We reveal that many theoretical works assume cyber risk information sharing to be beneficial, while empirical validations are often missing.

Zhou, Lu, Liu, Qiao, Wang, Yong, Li, Hui.  2017.  Secure Group Information Exchange Scheme for Vehicular Ad Hoc Networks. Personal Ubiquitous Comput.. 21:903–910.

In this paper, a novel secure information exchange scheme is proposed for MIMO vehicular ad hoc networks (VANETs) through a physical-layer approach. In the scheme, a group of On-Board Units (OBUs) exchange information with the help of one Road Side Unit (RSU). By utilizing a key signal-processing technique, namely Direction Rotation Alignment, the information to be exchanged by two neighboring OBUs is aligned in the same direction to form a summed signal at the RSU or at external eavesdroppers. With such a summed signal, the RSU or the eavesdropper cannot recover the individual information from the OBUs. By regulating the transmission rate of each OBU, information-theoretic security can be achieved. The secrecy sum-rates of the proposed scheme are then analyzed. Finally, numerical results are presented to demonstrate the theoretical analysis.

Hamasaki, J., Iwamura, K..  2017.  Geometric group key-sharing scheme using Euclidean distance. 2017 14th IEEE Annual Consumer Communications Networking Conference (CCNC). :1004–1005.

A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast without depending on the number of nodes. The scheme is based on geometric characteristics and provides information-theoretic security in the analysis of transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results showed that our scheme is suitable for an Internet-of-Things (IoT) network.

Wang, P., Safavi-Naini, R..  2017.  Interactive message transmission over adversarial wiretap channel II. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

In Wyner's wiretap II model of communication, Alice and Bob are connected by a channel that can be eavesdropped by an adversary with unlimited computation who can select a fraction of the communication to view, and the goal is to provide perfect information-theoretic security. Information-theoretic security is increasingly important because of the threat of quantum computers, which can effectively break the algorithms and protocols used in today's public-key infrastructure. We consider interactive protocols for the wiretap II channel with an active adversary who can eavesdrop and add adversarial noise to the eavesdropped part of the codeword. These channels capture a wireless setting where malicious eavesdroppers at reception distance of the transmitter can eavesdrop on the communication and introduce a jamming signal into the channel. We derive a new upper bound R ≤ 1 - ρ for the rate of interactive protocols over the two-way wiretap II channel with active adversaries, and construct a perfectly secure protocol family with achievable rate 1 - 2ρ + ρ². This is strictly higher than the rate of the best one-round protocol, which is 1 - 2ρ, hence showing that interaction improves the rate. We also prove that, even with interaction, reliable communication is possible only if ρ < 1/2. An interesting aspect of this work is that our bounds also hold in a network setting when two nodes are connected by n paths, a fraction ρ of which are corrupted by the adversary. We discuss our results, relate them to other works, and propose directions for future work.
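
The rate results quoted in the abstract can be collected in one place (a restatement only, with ρ the fraction of the codeword the active adversary can read and overwrite):

\begin{align*}
  R &\le 1 - \rho && \text{(new upper bound for the two-way wiretap II channel)}\\
  R_{\text{achieved}} &= 1 - 2\rho + \rho^{2} = (1-\rho)^{2} && \text{(interactive protocol family constructed in the paper)}\\
  R_{\text{one-round}} &= 1 - 2\rho && \text{(best previous one-round protocol)}\\
  (1-\rho)^{2} &> 1 - 2\rho \quad \text{for } 0 < \rho < \tfrac{1}{2} && \text{(so interaction strictly improves the rate)}
\end{align*}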

Forutan, V., Elschner, R., Schmidt-Langhorst, C., Schubert, C., Fischer, R. F. H..  2017.  Towards Information-Theoretic Security in Optical Networks. Photonic Networks; 18. ITG-Symposium. :1–7.

In fiber-optic communication networks, research on data security at the lower layers of the protocol stack, and in particular at the physical layer by means of information-theoretic concepts, is only beginning. Nevertheless, it has recently attracted quite some attention, as it holds the promise of providing unconditional, perfect security without the need for secret key exchanges. In this paper, we analyze some important constraints that such concepts put on a potential implementation of physical-layer security. We review the fundamentals of physical-layer security on the basis of the commonly used AWGN wiretap channel model. For this channel model we summarize the security metrics typically used in information theory and in particular recall that, for secure communication over the AWGN channel, the legitimate receiver needs an SNR advantage over the eavesdropper. Next, we relate the information-theoretic metrics to physically measurable quantities in optical communications engineering, namely optical signal-to-noise ratio (OSNR) and bit-error ratio (BER), and translate the information-theoretic wiretap scenario to a simple real-world point-to-point optical transmission link in which part of the light is wiretapped using a bend coupler. We investigate the achievable OSNR advantage under realistic assumptions for fiber loss, tap ratio, and noise budget and find that secure transmission is limited to a distance of a few tens of kilometers in this case. The maximum secure transmission distance decreases with an increasing tap ratio chosen by the eavesdropper. This can only be counteracted by monitoring the link loss towards the legitimate receiver, which would force the eavesdropper to choose small tap ratios in order to remain undetected. In an outlook towards further research directions, we identify information-theoretic approaches that could potentially allow physical-layer security to be realized in more generalized scenarios or over longer distances.
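
The SNR-advantage statement in this abstract corresponds to the standard secrecy capacity of the Gaussian (AWGN) wiretap channel, restated here for reference:

\begin{equation*}
  C_s \;=\; \bigl[\,C_B - C_E\,\bigr]^{+}
      \;=\; \Bigl[\tfrac{1}{2}\log_2\!\bigl(1+\mathrm{SNR}_B\bigr)
            \;-\; \tfrac{1}{2}\log_2\!\bigl(1+\mathrm{SNR}_E\bigr)\Bigr]^{+},
\end{equation*}

which is positive only when the legitimate receiver's SNR (here measured as OSNR) exceeds the eavesdropper's; a larger tap ratio raises the eavesdropper's SNR and shrinks the secure-transmission margin, matching the distance limits reported above.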

Stoughton, A., Varia, M..  2017.  Mechanizing the Proof of Adaptive, Information-Theoretic Security of Cryptographic Protocols in the Random Oracle Model. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :83–99.

We report on our research on proving the security of multi-party cryptographic protocols using the EASYCRYPT proof assistant. We work in the computational model using the sequence-of-games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, instead of being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games that are expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries, ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EASYCRYPT of a three-party private count retrieval protocol. We emphasize the lessons learned from completing this proof.

2017-12-04
Al-Shomrani, A., Fathy, F., Jambi, K..  2017.  Policy enforcement for big data security. 2017 2nd International Conference on Anti-Cyber Crimes (ICACC). :70–74.

Security and privacy of big data become challenging as data grow and become accessible to more and more clients. Large-scale data storage is becoming a necessity for healthcare, business segments, government departments, scientific endeavors and individuals. Our research focuses on privacy and security, and on how we can make sure that big data are secured. Managing security policy for big data is a challenge that our framework will handle. The privacy policy needs to be integrated, flexible, context-aware and customizable. We will build a framework that receives data from customers, analyzes the received data, extracts the privacy policy and then identifies the sensitive data. In this paper, we present the techniques for privacy policy that will be used in our framework.

2017-11-20
Haq, M. S. Ul, Lejian, L., Lerong, M..  2016.  Transitioning Native Application into Virtual Machine by Using Hardware Virtualization Extensions. 2016 International Symposium on Computer, Consumer and Control (IS3C). :397–403.

In the presence of known and unknown vulnerabilities in the code and control flow of programs, virtual-machine-like isolation and sandboxing, which confine the maliciousness of a process by monitoring and controlling the behaviour of the untrusted application, are an effective strategy. A confined malicious application cannot affect system resources or other applications running on the same operating system. However, present sandboxing techniques have drawbacks ranging from scope to methodology. Some of the proposed techniques restrict specific aspects of execution, e.g., system calls and file-system access. Likewise, techniques that truly isolate the application by providing a separate execution environment either require kernel modifications or a full-blown operating system. Moreover, these do not provide isolation from top to bottom but only virtualize operating-system services. In this paper, we propose a design that confines a native Linux process with virtual-machine-equivalent isolation by using hardware virtualization extensions, with nominal initialization and acceptable execution overheads. We implemented a prototype, called Process Virtual Machine, that transitions a native process into a virtual machine, provides the minimal possible execution environment, and intercepts and virtualizes system calls to execute them on the host kernel. Experimental results show the effectiveness of our proposed technique.