Biblio

Filters: Keyword is black box encryption
2023-03-31
Luo, Xingqi, Wang, Haotian, Dong, Jinyang, Zhang, Chuan, Wu, Tong.  2022.  Achieving Privacy-preserving Data Sharing for Dual Clouds. 2022 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :139–146.
With the advent of the era of the Internet of Things (IoT), the increasing data volume makes storage outsourcing a new trend for enterprises and individuals. However, data breaches frequently occur, posing significant challenges to the privacy protection of outsourced data management systems. There is an urgent need for efficient and secure data sharing schemes for outsourced data management infrastructure, such as the cloud. Therefore, this paper designs a dual-server-based data sharing scheme with data privacy and high efficiency for the cloud, enabling internal members to exchange their data efficiently and securely. By adopting secure two-party computation, the dual servers guarantee that neither server can obtain the complete data independently. In our proposed scheme, if the data is damaged while being sent to the user, it cannot be restored. To prevent malicious deletion, the data owner adds a random number to verify the identity during the uploading procedure. To ensure data security, the data is transmitted in ciphertext throughout the process by using searchable encryption. Finally, the black-box leakage analysis and theoretical performance evaluation demonstrate that our proposed data sharing scheme provides solid security and high efficiency in practice.
Biswas, Ankur, K V, Pradeep, Kumar Pandey, Arvind, Kumar Shukla, Surendra, Raj, Tej, Roy, Abhishek.  2022.  Hybrid Access Control for Storing Large Data with Security. 2022 International Interdisciplinary Humanitarian Conference for Sustainability (IIHC). :838–844.
Although the public cloud is known for its incredible capabilities, consumers cannot totally depend on cloud service providers to keep personal data safe because of the limited control clients retain over it. To protect privacy, data owners outsource their data in encrypted form rather than in the clear. To carry out fine-grained access control over the ciphertext and share decryption capabilities with others, ciphertext-policy attribute-based encryption (CP-ABE) may be employed. This, however, falls short of effectively protecting against new threats. With a number of older methods, the public cloud is unable to verify whether a downloader can actually decrypt; therefore, the stored files must be accessible to everyone having access to the data storage. A malicious attacker may download hundreds of files in order to launch Economic Denial of Sustainability (EDoS) attacks, greatly depleting the cloud resources for which the cloud storage user is responsible for paying. Additionally, the public cloud both keeps the accounts and charges for resource consumption, without offering data owners any supporting information. Cloud storage infrastructure should assuage these concerns in practice. In this study, we provide a technique for resource accountability and defense against EDoS attacks for encrypted cloud storage. It uses black-box CP-ABE techniques and abides by arbitrary CP-ABE access policies. After presenting two methods for different parameters, speed and security evaluations are given.
Yuan, Dandan, Cui, Shujie, Russello, Giovanni.  2022.  We Can Make Mistakes: Fault-tolerant Forward Private Verifiable Dynamic Searchable Symmetric Encryption. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). :587–605.
Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we discover that the community currently only focuses on preventing malicious behavior from the server but ignores incorrect updates from the client, which are very likely to happen since there is no record on the client to check against. Indeed, most existing VDSSE schemes cannot tolerate incorrect updates from the client. For instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness. In this paper, we demonstrate the vulnerabilities of a class of existing VDSSE schemes that cause them to lose the correctness and soundness properties under incorrect updates. We propose an efficient fault-tolerant solution that can treat any DSSE scheme as a black box and turn it into a fault-tolerant VDSSE in the malicious model. Forward privacy is an important property of DSSE that prevents the server from linking an update operation to previous search queries. Our approach can also make any forward-secure DSSE scheme into a fault-tolerant VDSSE without breaking the forward security guarantee. In this work, we take FAST [1] (TDSC 2020), a forward-secure DSSE, as an example, implement a prototype of our solution, and evaluate its performance. Even when compared with the previous fastest forward-private construction that does not support fault tolerance, the experiments show that our construction reduces client storage by 9× and has better search and update efficiency.
Hirahara, Shuichi.  2022.  NP-Hardness of Learning Programs and Partial MCSP. 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS). :968–979.
A long-standing open question in computational learning theory is to prove NP-hardness of learning efficient programs, the setting of which is in between proper learning and improper learning. Ko (COLT’90, SICOMP’91) explicitly raised this open question and demonstrated its difficulty by proving that there exists no relativizing proof of NP-hardness of learning programs. In this paper, we overcome Ko’s relativization barrier and prove NP-hardness of learning programs under randomized polynomial-time many-one reductions. Our result is provably non-relativizing, and comes somewhat close to the parameter range of improper learning: We observe that mildly improving our inapproximability factor is sufficient to exclude Heuristica, i.e., show the equivalence between average-case and worst-case complexities of NP. We also make progress on another long-standing open question of showing NP-hardness of the Minimum Circuit Size Problem (MCSP). We prove NP-hardness of the partial function variant of MCSP as well as other meta-computational problems, such as the problems MKTP* and MINKT* of computing the time-bounded Kolmogorov complexity of a given partial string, under randomized polynomial-time reductions. Our proofs are algorithmic information (a.k.a. Kolmogorov complexity) theoretic. We utilize black-box pseudorandom generator constructions, such as the Nisan-Wigderson generator, as a one-time encryption scheme secure against a program which “does not know” a random function. Our key technical contribution is to quantify the “knowledge” of a program by using conditional Kolmogorov complexity and show that no small program can know many random functions.
Zhang, Hui, Ding, Jianing, Tan, Jianlong, Gou, Gaopeng, Shi, Junzheng.  2022.  Classification of Mobile Encryption Services Based on Context Feature Enhancement. 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :860–866.
Smartphones have become the preferred way for Chinese users to access the Internet. Mobile traffic volume is large, and this traffic is mainly generated by services rather than by the operating system itself. Now that traffic is almost universally encrypted, classifying mobile encryption services can effectively reduce the analytical difficulty caused by the diversity of mobile terminals and operating systems, identify user access targets more accurately, and thereby enhance service quality and network security management. Existing mobile encryption service classification methods have two shortcomings in feature selection: first, the DL model is used as a black box, and high-dimensional features are fed into the classification model without discrimination, which results in a sharp increase in computational complexity and limits practical application; second, existing feature selection methods make insufficient use of the temporal and spatial correlation information in traffic, resulting in poor robustness and low classification accuracy. In this paper, we propose a feature enhancement method based on adjacent-flow contextual features and evaluate it on Apple encryption service traffic collected from the real world. Across five DL classification models, the fine-grained classification accuracy of Apple services is significantly improved. Our work can provide an effective solution for the fine-grained management of mobile encryption services.
B S, Sahana Raj, Venugopalachar, Sridhar.  2022.  Traitor Tracing in Broadcast Encryption using Vector Keys. 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). :1–5.
Secure data transmission from one sender to many authorized users is achieved through Broadcast Encryption (BE). In BE, the source transmits encrypted data to multiple registered users who already hold their decryption keys. Untrustworthy users, known as traitors, can give out their secret keys to a hacker to form a pirate decoding system that decrypts the original message on the sly. The process of detecting the traitors is known as traitor tracing in cryptography. This paper presents a new black-box tracing method that is fully collusion resistant, designated Traitor Tracing in Broadcast Encryption using Vector Keys (TTBE-VK). The proposed method uses integer vectors in the finite field Z_p as encryption/decryption/tracing keys, reducing the computational cost compared to existing methods.
2021-03-09
Rahmati, A., Moosavi-Dezfooli, S.-M., Frossard, P., Dai, H..  2020.  GeoDA: A Geometric Framework for Black-Box Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8443–8452.
Adversarial examples are known as carefully perturbed images fooling image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings where the adversary can only generate a small number of queries, each of them returning the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓp norms, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p = 2, we theoretically show that our algorithm actually converges to the minimal perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms as it generates smaller perturbations with a reduced number of queries.
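A minimal sketch of the label-only, query-driven idea described in the abstract above (not the authors' GeoDA code; the linear stand-in classifier and all constants below are assumptions for illustration): the local boundary normal is estimated from random probes whose top-1 label flips, and the sample is nudged along that estimate until it crosses the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "black-box" classifier: only its top-1 label is observable.
w = rng.normal(size=20)
x0 = rng.normal(size=20)
b = 1.0 - x0 @ w                      # place x0 close to the decision boundary
top1_label = lambda x: int(x @ w + b > 0)

def estimate_normal(x, radius, n_queries):
    """Average random probe directions, signed by whether they flip the label."""
    y0 = top1_label(x)
    d = rng.normal(size=(n_queries, x.size))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    sign = np.array([1.0 if top1_label(x + radius * di) != y0 else -1.0 for di in d])
    n = (sign[:, None] * d).mean(axis=0)
    return n / (np.linalg.norm(n) + 1e-12)

x, y0 = x0.copy(), top1_label(x0)
for _ in range(20):                    # small query budget per iteration
    x = x + 0.1 * estimate_normal(x, radius=1.0, n_queries=60)
    if top1_label(x) != y0:
        break

print("label flipped:", top1_label(x) != y0,
      "| L2 perturbation:", round(float(np.linalg.norm(x - x0)), 3))
```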
Herrera, A. E. Hinojosa, Walshaw, C., Bailey, C..  2020.  Improving Black Box Classification Model Veracity for Electronics Anomaly Detection. 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA). :1092–1097.
Data-driven classification models are useful for assessing the quality of manufactured electronics. Because decisions are taken based on the models, their veracity is relevant, covering aspects such as accuracy, transparency and clarity. The proposed BB-Stepwise algorithm aims to improve the transparency and accuracy of black box classification models. K-Nearest Neighbours (KNN) is a black box model which is easy to implement and has achieved good classification performance in different applications. In this paper KNN-Stepwise is illustrated for fault detection of electronics devices. The results show that the proposed algorithm improves the accuracy, veracity and transparency of KNN models, achieving higher transparency and clarity than Decision Tree models with at least similar accuracy.
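As a rough, simplified illustration of a stepwise wrapper around KNN (my own sketch under assumed data and parameters, not the paper's BB-Stepwise algorithm): features are added greedily only while they improve cross-validated accuracy, which yields a smaller, more interpretable feature set for the black box model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data standing in for electronics quality measurements.
X, y = make_classification(n_samples=400, n_features=15, n_informative=5,
                           random_state=0)

def cv_accuracy(feature_idx):
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, feature_idx], y, cv=5).mean()

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # Try each remaining feature and keep the one that helps most.
    scores = {f: cv_accuracy(selected + [f]) for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:          # stop when no feature improves accuracy
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "| cv accuracy:", round(best_score, 3))
```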
Guibene, K., Ayaida, M., Khoukhi, L., MESSAI, N..  2020.  Black-box System Identification of CPS Protected by a Watermark-based Detector. 2020 IEEE 45th Conference on Local Computer Networks (LCN). :341–344.

The implication of Cyber-Physical Systems (CPS) in critical infrastructures (e.g., smart grids, water distribution networks, etc.) has introduced new security issues and vulnerabilities to those systems. In this paper, we demonstrate that black-box system identification using Support Vector Regression (SVR) can be used efficiently to build a model of a given industrial system even when this system is protected with a watermark-based detector. First, we briefly describe the Tennessee Eastman Process used in this study. Then, we present the principle of the detection scheme and the theory behind SVR. Finally, we design an efficient black-box SVR algorithm for the Tennessee Eastman Process. Extensive simulations prove the efficiency of our proposed algorithm.
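A minimal sketch of the black-box identification step (with a toy first-order plant standing in for the Tennessee Eastman simulation, which is an assumption; the lag count and SVR hyperparameters are illustrative too): lagged inputs and outputs form the regressors and scikit-learn's SVR learns the one-step-ahead map.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Toy plant y[k+1] = 0.9*y[k] + 0.5*u[k] + noise, standing in for the real process.
N = 2000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = 0.9 * y[k] + 0.5 * u[k] + 0.01 * rng.normal()

# Regressors: a window of past outputs and inputs (NARX-style lags).
lags = 3
X = np.column_stack([y[lags - i - 1:N - i - 1] for i in range(lags)] +
                    [u[lags - i - 1:N - i - 1] for i in range(lags)])
t = y[lags:]

split = int(0.7 * len(t))
model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:split], t[:split])
pred = model.predict(X[split:])
print("one-step-ahead MSE:", mean_squared_error(t[split:], pred))
```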

Rojas-Dueñas, G., Riba, J., Kahalerras, K., Moreno-Eguilaz, M., Kadechkar, A., Gomez-Pau, A..  2020.  Black-Box Modelling of a DC-DC Buck Converter Based on a Recurrent Neural Network. 2020 IEEE International Conference on Industrial Technology (ICIT). :456–461.
Artificial neural networks allow the identification of black-box models. This paper proposes a method aimed at replicating the static and dynamic behavior of a DC-DC power converter based on a recurrent nonlinear autoregressive exogenous neural network. The method proposed in this work applies an algorithm that trains a neural network based on the inputs and outputs (currents and voltages) of a Buck converter. The approach is validated by means of simulated data of a realistic nonsynchronous Buck converter model programmed in Simulink and by means of experimental results. The predictions made by the neural network are compared to the actual outputs of the system, to determine the accuracy of the method, thus validating the proposed approach. Both simulation and experimental results show the feasibility and accuracy of the proposed black-box approach.
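A rough sketch in the same spirit, with two deliberate simplifications: a feed-forward regressor over NARX-style lags approximates the recurrent NARX network of the paper, and a toy second-order difference equation stands in for the Buck converter waveforms. After training on input/output samples, the model is run free-running, feeding its own predictions back to mimic the converter's dynamic behavior.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Toy second-order response standing in for buck-converter voltage dynamics.
N = 3000
u = (rng.uniform(0, 1, N) > 0.5).astype(float)       # duty-cycle-like input steps
v = np.zeros(N)
for k in range(2, N):
    v[k] = 1.6 * v[k-1] - 0.64 * v[k-2] + 0.04 * u[k-1]

X = np.column_stack([v[1:N-1], v[0:N-2], u[1:N-1]])   # v[k-1], v[k-2], u[k-1]
t = v[2:]

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, t)

# Free-running simulation: feed the model's own outputs back as regressors.
sim = list(v[:2])
for k in range(2, 500):
    sim.append(net.predict([[sim[k-1], sim[k-2], u[k-1]]])[0])
rmse = float(np.sqrt(np.mean((np.array(sim) - v[:500]) ** 2)))
print("free-run RMSE over 500 steps:", rmse)
```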
Mashhadi, M. J., Hemmati, H..  2020.  Hybrid Deep Neural Networks to Infer State Models of Black-Box Systems. 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). :299–311.
Inferring behavior model of a running software system is quite useful for several automated software engineering tasks, such as program comprehension, anomaly detection, and testing. Most existing dynamic model inference techniques are white-box, i.e., they require source code to be instrumented to get run-time traces. However, in many systems, instrumenting the entire source code is not possible (e.g., when using black-box third-party libraries) or might be very costly. Unfortunately, most black-box techniques that detect states over time are either univariate, or make assumptions on the data distribution, or have limited power for learning over a long period of past behavior. To overcome the above issues, in this paper, we propose a hybrid deep neural network that accepts as input a set of time series, one per input/output signal of the system, and applies a set of convolutional and recurrent layers to learn the non-linear correlations between signals and the patterns, over time. We have applied our approach on a real UAV auto-pilot solution from our industry partner with half a million lines of C code. We ran 888 random recent system-level test cases and inferred states, over time. Our comparison with several traditional time series change point detection techniques showed that our approach improves their performance by up to 102%, in terms of finding state change points, measured by F1 score. We also showed that our state classification algorithm provides on average 90.45% F1 score, which improves traditional classification algorithms by up to 17%.
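The hybrid architecture described above (convolutional layers over windows of multivariate input/output signals, followed by recurrent layers and a state classifier) can be sketched roughly as follows; the layer sizes, window length, and number of states are assumptions, and this is not the authors' model.

```python
import torch
import torch.nn as nn

class HybridStateModel(nn.Module):
    """Conv1d layers learn local signal patterns, an LSTM learns temporal
    correlations across them, and a linear head predicts the system state."""
    def __init__(self, n_signals=8, n_states=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_states)

    def forward(self, x):                  # x: (batch, time, n_signals)
        z = self.conv(x.transpose(1, 2))   # Conv1d expects (batch, channels, time)
        _, (h, _) = self.rnn(z.transpose(1, 2))
        return self.head(h[-1])            # logits over states

# Smoke test on random data shaped like windows of input/output signals.
model = HybridStateModel()
windows = torch.randn(16, 200, 8)          # 16 windows, 200 time steps, 8 signals
states = torch.randint(0, 5, (16,))
loss = nn.CrossEntropyLoss()(model(windows), states)
loss.backward()
print("logits shape:", model(windows).shape, "| loss:", float(loss))
```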
Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J..  2020.  Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation. 2020 IEEE International Conference on Image Processing (ICIP). :648–652.
Although deep convolutional neural networks (CNNs) perform well in many computer vision tasks, their classification mechanism is very vulnerable when exposed to the perturbations of adversarial attacks. In this paper, we propose a new algorithm to generate a substitute model of black-box CNN models by using knowledge distillation. The proposed algorithm distills multiple CNN teacher models into a compact student model that serves as a substitute for the black-box CNN models to be attacked. Black-box adversarial samples can then be generated on this substitute model by using various white-box attack methods. According to our experiments on ResNet18 and DenseNet121, our algorithm boosts the attack success rate (ASR) by 20% by training the substitute model based on knowledge distillation.
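A minimal sketch of the distillation objective behind such a substitute model (not the paper's exact training recipe; the tiny MLPs and random query inputs below are stand-ins for CNN teachers such as ResNet18/DenseNet121 and real images): the student is trained to match the teacher's temperature-softened output distribution on query inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_step(student, teacher, x, optimizer, T=4.0):
    """One update: the student mimics the teacher's softened class distribution."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-ins: tiny MLPs over random vectors instead of CNNs over images.
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    batch = torch.randn(64, 32)                 # query inputs sent to the teacher
    loss = distillation_step(student, teacher, batch, opt)
print("final distillation loss:", round(loss, 4))
```

Once the student tracks the teacher's soft outputs, white-box attacks run on the student tend to transfer to the original black-box model, which is the point of the substitute.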
MATSUNAGA, Y., AOKI, N., DOBASHI, Y., KOJIMA, T..  2020.  A Black Box Modeling Technique for Distortion Stomp Boxes Using LSTM Neural Networks. 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). :653–656.
This paper describes an experimental result of modeling stomp boxes of the distortion effect based on a machine learning approach. Our proposed technique models a distortion stomp box as a neural network consisting of LSTM layers. In this approach, the neural network is employed to learn the nonlinear behavior of the distortion stomp boxes. All the parameters for replicating the distortion sound are estimated through a training process using the input and output signals obtained from some commercial stomp boxes. The experimental result indicates that the proposed technique is capable of replicating the distortion sound to a certain extent using the well-trained neural networks.
2021-03-04
Kalin, J., Ciolino, M., Noever, D., Dozier, G..  2020.  Black Box to White Box: Discover Model Characteristics Based on Strategic Probing. 2020 Third International Conference on Artificial Intelligence for Industries (AI4I). :60—63.

In Machine Learning, White Box Adversarial Attacks rely on knowing underlying knowledge about the model attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the output of the model become the training data for a deep classifier. Two subdomains in Machine Learning are explored - image based classifiers and text transformers with GPT-2. With image classification, the focus is on exploring commonly deployed architectures and datasets available in popular public libraries. Using a single transformer architecture with multiple levels of parameters, text generation is explored by fine-tuning on different datasets. The datasets explored in both the image and text domains are distinguishable from one another. Diversity in text transformer outputs implies further research is needed to successfully classify architecture attribution in the text domain.
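A toy sketch of the probing pipeline as I read it (the tiny randomly initialized networks below are stand-ins for real CNN or GPT-2 deployments, and all sizes are assumptions): a fixed probe set is sent to each black-box model, the concatenated responses become a fingerprint vector, and a simple classifier learns to attribute the architecture from fingerprints.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
rng = np.random.default_rng(0)

def make_model(kind):
    """Stand-ins for distinct architectures; real probes would target CNNs/GPT-2."""
    hidden = {"small": 16, "wide": 128, "deep": 64}[kind]
    layers = [nn.Linear(20, hidden), nn.ReLU()]
    if kind == "deep":
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers += [nn.Linear(hidden, 10), nn.Softmax(dim=1)]
    return nn.Sequential(*layers)

probes = torch.randn(30, 20)                 # fixed, structured set of probe inputs

fingerprints, labels = [], []
for label, kind in enumerate(["small", "wide", "deep"]):
    for _ in range(40):                      # independently initialized instances
        model = make_model(kind)             # standing in for trained deployments
        with torch.no_grad():
            out = model(probes).numpy().ravel()   # probe responses = fingerprint
        fingerprints.append(out)
        labels.append(label)

X, y = np.array(fingerprints), np.array(labels)
idx = rng.permutation(len(y))
split = int(0.7 * len(y))
clf = LogisticRegression(max_iter=2000).fit(X[idx[:split]], y[idx[:split]])
print("architecture attribution accuracy:", clf.score(X[idx[split:]], y[idx[split:]]))
```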

Crescenzo, G. D., Bahler, L., McIntosh, A..  2020.  Encrypted-Input Program Obfuscation: Simultaneous Security Against White-Box and Black-Box Attacks. 2020 IEEE Conference on Communications and Network Security (CNS). :1—9.

We consider the problem of protecting cloud services from simultaneous white-box and black-box attacks. Recent research in cryptographic program obfuscation considers the problem of protecting the confidentiality of programs and any secrets in them. In this model, a provable program obfuscation solution makes white-box attacks on the program no more useful than black-box attacks. Motivated by very recent results showing successful black-box attacks on machine learning programs run by cloud servers, we propose and study the approach of augmenting the program obfuscation solution model so as to achieve, in at least some class of application scenarios, program confidentiality in the presence of both white-box and black-box attacks. We propose and formally define encrypted-input program obfuscation, where a key is shared between the entity obfuscating the program and the entity encrypting the program's inputs. We believe this model might be of interest in practical scenarios where cloud programs operate over encrypted data received by associated sensors (e.g., Internet of Things, Smart Grid). Under standard intractability assumptions, we show various results that are not known in the traditional cryptographic program obfuscation model; most notably: Yao's garbled circuit technique implies encrypted-input program obfuscation hiding all gates of an arbitrary polynomial circuit; and very efficient encrypted-input program obfuscation for range membership programs and a class of machine learning programs (i.e., decision trees). The performance of the latter solutions has only a small constant overhead over the equivalent unobfuscated program.

2021-03-01
Kuppa, A., Le-Khac, N.-A..  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand for transparency and explainability from stakeholders is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods which help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white box setting; (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black box attack for analyzing the consistency, correctness and confidence security properties of gradient-based XAI methods. We validate our proposed system on 3 security-relevant data-sets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.

2020-09-04
Shi, Yang, Zhang, Qing, Liang, Jingwen, He, Zongjian, Fan, Hongfei.  2019.  Obfuscatable Anonymous Authentication Scheme for Mobile Crowd Sensing. IEEE Systems Journal. 13:2918—2929.

Mobile crowd sensing (MCS) is a rapidly developing technique for information collection from the users of mobile devices. This technique deals with participants' personal information such as their identities and locations, thus raising significant security and privacy concerns. Accordingly, anonymous authentication schemes have been widely considered for preserving participants' privacy in MCS. However, mobile devices are easy to lose and vulnerable to device capture attacks, which enables an attacker to extract the private authentication key of a mobile application and to further invade the user's privacy by linking sensed data with the user's identity. To address this issue, we have devised a special anonymous authentication scheme where the authentication request algorithm can be obfuscated into an unintelligible form and thus the authentication key is not explicitly used. This scheme not only achieves authenticity and unlinkability for participants, but also resists impersonation, replay, denial-of-service, man-in-the-middle, collusion, and insider attacks. The scheme's obfuscation algorithm is the first obfuscator for anonymous authentication, and it satisfies the average-case secure virtual black-box property. The scheme also supports batch verification of authentication requests for improving efficiency. Performance evaluations on a workstation and smart phones have indicated that our scheme works efficiently on various devices.

Teng, Jikai, Ma, Hongyang.  2019.  Dynamic asymmetric group key agreement protocol with traitor traceability. IET Information Security. 13:703—710.
In asymmetric group key agreement (ASGKA) protocols, a group of users establish a common encryption key which is publicly accessible and compute pairwise different decryption keys. Designing an ASGKA protocol with traitor traceability was left as an open problem at Eurocrypt 2009. A one-round dynamic authenticated ASGKA protocol with public traitor traceability is proposed in this study. It provides a black-box tracing algorithm. Ind-CPA security with key compromise impersonation resilience (KCIR) and forward secrecy of ASGKA protocols is formally defined. The proposed protocol is proved to be Ind-CPA secure with KCIR and forward secrecy under the D k-HDHE assumption. It is also proved that the proposed protocol resists collusion attacks. The Setup and Join algorithms each require one communication round, and the Leave algorithm requires no message to be transmitted. The proposed protocol adopts an O(log N)-way asymmetric multilinear map so that the sizes of the public key and the ciphertext are both O(log N), where N is the number of potential group members. This is the first ASGKA protocol with public traitor traceability that is more efficient than the trivial construction of ASGKA protocols.
Li, Chengqing, Feng, Bingbing, Li, Shujun, Kurths, Jüergen, Chen, Guanrong.  2019.  Dynamic Analysis of Digital Chaotic Maps via State-Mapping Networks. IEEE Transactions on Circuits and Systems I: Regular Papers. 66:2322—2335.
Chaotic dynamics is widely used to design pseudo-random number generators and for other applications, such as secure communications and encryption. This paper aims to study the dynamics of the discrete-time chaotic maps in the digital (i.e., finite-precision) domain. Differing from the traditional approaches treating a digital chaotic map as a black box with different explanations according to the test results of the output, the dynamical properties of such chaotic maps are first explored with a fixed-point arithmetic, using the Logistic map and the Tent map as two representative examples, from a new perspective with the corresponding state-mapping networks (SMNs). In an SMN, every possible value in the digital domain is considered as a node and the mapping relationship between any pair of nodes is a directed edge. The scale-free properties of the Logistic map's SMN are proved. The analytic results are further extended to the scenario of floating-point arithmetic and for other chaotic maps. Understanding the network structure of a chaotic map's SMN in digital computers can facilitate counteracting the undesirable degeneration of chaotic dynamics in finite-precision domains, also helping to classify and improve the randomness of pseudo-random number sequences generated by iterating the chaotic maps.
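A small sketch of the state-mapping-network construction for the Logistic map under fixed-point arithmetic (the bit width and the truncation-based rounding are assumptions, not the paper's exact setup): every representable value is a node, each node has one outgoing edge to its image, and simple statistics such as in-degrees and cycle lengths expose the degeneration of the digital dynamics.

```python
from collections import Counter

def logistic_fixed(x, n_bits):
    """One iteration of the Logistic map 4*x*(1-x) in n-bit fixed point.
    States are integers in [0, 2^n]; integer X represents the real value X / 2^n."""
    scale = 1 << n_bits
    return (4 * x * (scale - x)) // scale   # truncation as one possible rounding

n_bits = 10
scale = 1 << n_bits

# State-mapping network: every representable value is a node, x -> f(x) is an edge.
smn = {x: logistic_fixed(x, n_bits) for x in range(scale + 1)}

in_degree = Counter(smn.values())
print("nodes:", len(smn))
print("distinct successor states:", len(in_degree))
print("max in-degree:", max(in_degree.values()))

# Follow one trajectory until it enters a cycle (chaos degenerates to a short cycle).
x, seen, step = 1, {}, 0
while x not in seen:
    seen[x] = step
    x = smn[x]
    step += 1
print("transient length:", seen[x], "| cycle length:", step - seen[x])
```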
Li, Ge, Iyer, Vishnuvardhan, Orshansky, Michael.  2019.  Securing AES against Localized EM Attacks through Spatial Randomization of Dataflow. 2019 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :191—197.
A localized electromagnetic (EM) attack is a potent threat to the security of embedded cryptographic implementations. The attack utilizes high resolution EM probes to localize and exploit information leakage in sub-circuits of a system, providing information not available in traditional EM and power attacks. In this paper, we propose a countermeasure based on randomizing the assignment of sensitive data to parallel datapath components in a high-performance implementation of AES. In contrast to a conventional design where each state register byte is routed to a fixed S-box, a permutation network, controlled by a transient random value, creates a dynamic random mapping between the state registers and the set of S-boxes. This randomization results in a significant reduction of exploitable leakage. We demonstrate the countermeasure's effectiveness under two attack scenarios: a more powerful attack that assumes fully controlled access to an attacked implementation for building a priori EM-profiles, and a generic attack based on the black-box model. Spatial randomization leads to a 150× increase of the minimum traces to disclosure (MTD) for the profiled attack and a 3.25× increase of MTD for the black-box model attack.
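A functional sketch of the spatial-randomization idea (software only, with a toy 4-bit S-box; the authors' contribution is a hardware permutation network, so this only illustrates the routing logic): a fresh random permutation decides which S-box instance handles which state element, and the inverse routing restores the order, so the output equals a conventional SubBytes while the physical assignment changes every run.

```python
import secrets

# Toy 4-bit S-box standing in for the AES S-box (the idea is independent of the table).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sub_bytes_fixed(state):
    """Conventional datapath: state element i is always handled by S-box instance i."""
    return [SBOX[b] for b in state]

def sub_bytes_randomized(state):
    """Spatially randomized datapath: a fresh permutation routes each state element
    to a different S-box instance every run, then the results are routed back."""
    n = len(state)
    perm = list(range(n))
    for i in range(n - 1, 0, -1):                 # Fisher-Yates with a CSPRNG
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    routed = [state[perm[i]] for i in range(n)]   # permutation network (forward)
    substituted = [SBOX[b] for b in routed]       # parallel S-box instances
    out = [0] * n
    for i in range(n):                            # inverse routing restores order
        out[perm[i]] = substituted[i]
    return out

state = [secrets.randbelow(16) for _ in range(16)]
assert sub_bytes_randomized(state) == sub_bytes_fixed(state)
print("outputs identical; only the spatial assignment differs per run")
```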
Qin, Baodong, Zheng, Dong.  2019.  Generic Approach to Outsource the Decryption of Attribute-Based Encryption in Cloud Computing. IEEE Access. 7:42331—42342.

The notion of attribute-based encryption with outsourced decryption (OD-ABE) was proposed by Green, Hohenberger, and Waters. In OD-ABE, the ABE ciphertext is converted to a partially-decrypted ciphertext that has a shorter bit length and a faster decryption time than the ABE ciphertext. In particular, the transformation can be performed by a powerful third party with a public transformation key. In this paper, we propose a generic approach for constructing ABE with outsourced decryption from standard ABE, as long as the latter satisfies some additional properties. Its security can be reduced to the underlying standard ABE in the selective security model in a black-box way. To avoid the drawback of selective security in practice, we further propose a modified decryption outsourcing mode so that our generic construction can be adapted to satisfy adaptive security. This partially solves the open problem of constructing an OD-ABE scheme whose adaptive security can be reduced to the underlying ABE scheme in a black-box way. Then, we present some concrete constructions that not only encompass the existing ABE outsourcing schemes of Green et al., but also result in new selectively/adaptively-secure OD-ABE schemes with a more efficient transformation key generation algorithm. Finally, we use the PBC library to test the efficiency of our schemes and compare the results with some previous ones, which shows that our schemes are more efficient in terms of decryption outsourcing and transformation key generation.
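The key-blinding flavor of outsourced decryption can be illustrated with a much simpler (and assumed, toy-parameter) ElGamal-style analogy rather than ABE: the user hands the proxy a blinded key x/z, the proxy performs the heavy exponentiation to produce a partially decrypted ciphertext, and only the holder of the blinding factor z can finish the decryption.

```python
import secrets

# Toy subgroup parameters (NOT secure): p = 2q + 1, g generates the order-q subgroup.
p, q, g = 1019, 509, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1               # long-term secret key
    return x, pow(g, x, p)                         # (sk, pk)

def make_transform_key(x):
    z = secrets.randbelow(q - 1) + 1               # user keeps the blinding factor z
    tk = (x * pow(z, -1, q)) % q                   # proxy gets x / z, which hides x
    return z, tk

def encrypt(pk, m):                                # m must lie in the subgroup
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def proxy_transform(tk, c1):
    return pow(c1, tk, p)                          # heavy exponentiation, in the cloud

def user_finish(z, c2, d):
    return (c2 * pow(pow(d, z, p), -1, p)) % p     # cheap final step using z only

sk, pk = keygen()
z, tk = make_transform_key(sk)
msg = pow(g, 42, p)                                # encode the message in the subgroup
c1, c2 = encrypt(pk, msg)
d = proxy_transform(tk, c1)                        # "partially decrypted ciphertext"
assert user_finish(z, c2, d) == msg
print("outsourced decryption round-trip OK")
```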

Zhao, Zhen, Lai, Jianchang, Susilo, Willy, Wang, Baocang, Hu, Yupu, Guo, Fuchun.  2019.  Efficient Construction for Full Black-Box Accountable Authority Identity-Based Encryption. IEEE Access. 7:25936—25947.

Accountable authority identity-based encryption (A-IBE), as an attractive way to guarantee user privacy, enables a malicious private key generator (PKG) to be traced if it generates and re-distributes a user private key. In particular, an A-IBE scheme achieves full black-box security if it can further trace a decoder box and is secure against a malicious PKG who can access the user decryption results. In PKC'11, Sahai and Seyalioglu presented a generic construction for full black-box A-IBE from a primitive called dummy identity-based encryption, which is a hybrid between IBE and attribute-based encryption (ABE). However, owing to the complexity of ABE, their construction is inefficient and the size of private keys and ciphertexts in their instantiation is linear in the length of the user identity. In this paper, we present a new efficient generic construction for full black-box A-IBE from a new primitive called token-based identity-based encryption (TB-IBE), without using ABE. We first formalize the definition and security model for TB-IBE. Subsequently, we show that a TB-IBE scheme satisfying some properties can be converted to a full black-box A-IBE scheme, which is as efficient as the underlying TB-IBE scheme in terms of computational complexity and parameter sizes. Finally, we give an instantiation with O(1) computational complexity and constant-size master key pair, private keys, and ciphertexts.

2019-03-18
Bos, J., Ducas, L., Kiltz, E., Lepoint, T., Lyubashevsky, V., Schanck, J. M., Schwabe, P., Seiler, G., Stehle, D..  2018.  CRYSTALS - Kyber: A CCA-Secure Module-Lattice-Based KEM. 2018 IEEE European Symposium on Security and Privacy (EuroS P). :353–367.
Rapid advances in quantum computing, together with the announcement by the National Institute of Standards and Technology (NIST) to define new standards for digital-signature, encryption, and key-establishment protocols, have created significant interest in post-quantum cryptographic schemes. This paper introduces Kyber (part of CRYSTALS - Cryptographic Suite for Algebraic Lattices - a package submitted to the NIST post-quantum standardization effort in November 2017), a portfolio of post-quantum cryptographic primitives built around a key-encapsulation mechanism (KEM), based on hardness assumptions over module lattices. Our KEM is most naturally seen as a successor to the NEWHOPE KEM (Usenix 2016). In particular, the key and ciphertext sizes of our new construction are about half the size, the KEM offers CCA instead of only passive security, the security is based on a more general (and flexible) lattice problem, and our optimized implementation results in essentially the same running time as the aforementioned scheme. We first introduce a CPA-secure public-key encryption scheme, apply a variant of the Fujisaki-Okamoto transform to create a CCA-secure KEM, and eventually construct, in a black-box manner, CCA-secure encryption, key exchange, and authenticated-key-exchange schemes. The security of our primitives is based on the hardness of Module-LWE in the classical and quantum random oracle models, and our concrete parameters conservatively target more than 128 bits of post-quantum security.
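The CPA-to-CCA step mentioned above can be sketched generically with a Fujisaki-Okamoto-style wrapper; the toy deterministic-coin ElGamal below is only a stand-in for Kyber's module-lattice PKE, and the parameters, hash choices, and rejection handling are assumptions, not Kyber's.

```python
import hashlib, secrets

# Toy CPA-secure PKE with explicit coins (stand-in for the lattice PKE; NOT secure).
p, q, g = 1019, 509, 4

def pke_keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def pke_encrypt(pk, m, coins):                   # deterministic given the coins
    r = int.from_bytes(coins, "big") % (q - 1) + 1
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def pke_decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), -1, p)) % p

H = lambda *parts: hashlib.sha3_256(b"|".join(str(x).encode() for x in parts)).digest()

def encapsulate(pk):
    m = pow(g, secrets.randbelow(q - 1) + 1, p)  # random message in the group
    ct = pke_encrypt(pk, m, H("coins", m))       # coins derived from m (the G hash)
    return ct, H("key", m, ct)                   # shared key K = H(m, ct)

def decapsulate(sk, pk, ct):
    m = pke_decrypt(sk, ct)
    if pke_encrypt(pk, m, H("coins", m)) != ct:  # re-encryption check
        return H("reject", sk, ct)               # implicit rejection (real schemes
    return H("key", m, ct)                       # use a separate secret seed here)

sk, pk = pke_keygen()
ct, k_sender = encapsulate(pk)
assert decapsulate(sk, pk, ct) == k_sender
print("KEM shared key established:", k_sender.hex()[:16], "...")
```

The re-encryption check is what turns chosen-ciphertext queries into useless ones: any ciphertext not produced honestly from some message m fails the check and yields an unrelated key.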
Almazrooie, Mishal, Abdullah, Rosni, Samsudin, Azman, Mutter, Kussay N..  2018.  Quantum Grover Attack on the Simplified-AES. Proceedings of the 2018 7th International Conference on Software and Computer Applications. :204–211.

In this work, a quantum design for the Simplified-Advanced Encryption Standard (S-AES) algorithm is presented. Also, a quantum Grover attack is modeled on the proposed quantum S-AES. First, quantum circuits for the main components of S-AES in the finite field F_2[x]/(x^4 + x + 1) are constructed. Then, the constructed circuits are put together to form a quantum version of S-AES. A C-NOT synthesis is used to decompose some of the functions to reduce the number of needed qubits. The quantum S-AES is integrated into a black box queried by Grover's algorithm. A new approach is proposed to uniquely recover the secret key when the Grover attack is applied. The entire work is simulated and tested on a quantum mechanics simulator. The complexity analysis shows that a block cipher can be designed as a quantum circuit with a polynomial cost. In addition, the secret key is recovered with a quadratic speedup, as promised by Grover's algorithm.
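For scale, a back-of-the-envelope comparison (not the paper's circuit-level analysis) of the classical brute-force cost against the roughly (π/4)·√N Grover iterations for the 16-bit S-AES key space:

```python
import math

key_bits = 16                        # S-AES key length
N = 2 ** key_bits                    # size of the key space searched by the oracle

grover_iterations = math.floor(math.pi / 4 * math.sqrt(N))
classical_average = N // 2           # expected classical brute-force trials

print(f"key space:             2^{key_bits} = {N}")
print(f"classical (average):   {classical_average} S-AES evaluations")
print(f"Grover (oracle calls): about {grover_iterations} iterations")
print(f"quadratic speedup:     ~{classical_average / grover_iterations:.0f}x fewer queries")
```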

Marin, Eduard, Singelée, Dave, Yang, Bohan, Volski, Vladimir, Vandenbosch, Guy A. E., Nuttin, Bart, Preneel, Bart.  2018.  Securing Wireless Neurostimulators. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :287–298.

Implantable medical devices (IMDs) typically rely on proprietary protocols to wirelessly communicate with external device programmers. In this paper, we fully reverse engineer the proprietary protocol between a device programmer and a widely used commercial neurostimulator from one of the leading IMD manufacturers. For the reverse engineering, we follow a black-box approach and use inexpensive hardware equipment. We document the message format and the protocol state-machine, and show that the transmissions sent over the air are neither encrypted nor authenticated. Furthermore, we conduct several software radio-based attacks that could compromise the safety and privacy of patients, and investigate the feasibility of performing these attacks in real scenarios. Motivated by our findings, we propose a security architecture that allows for secure data exchange between the device programmer and the neurostimulator. It relies on using a patient's physiological signal for generating a symmetric key in the neurostimulator, and transporting this key from the neurostimulator to the device programmer through a secret out-of-band (OOB) channel. Our solution allows the device programmer and the neurostimulator to agree on a symmetric session key without these devices needing to share any prior secrets; offers an effective and practical balance between security and permissive access in emergencies; requires only minor hardware changes in the devices; adds minimal computation and communication overhead; and provides forward and backward security. Finally, we implement a proof-of-concept of our solution.
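As a generic sketch of deriving a session key from a shared physiological measurement (a simplified alternative flow, not the paper's OOB key-transport protocol; the signal bytes, salt, and info label are placeholders, and a real system needs quantization and error reconciliation so both sides obtain identical bytes):

```python
import hashlib, hmac, os

def hkdf_sha256(ikm, salt, info, length=32):
    """RFC 5869 HKDF: extract-then-expand a key from the input keying material."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder for the quantized physiological measurement both devices observe
# during the pairing gesture (hypothetical values for illustration only).
shared_signal = os.urandom(32)
salt = b"session-2024-01-01T00:00:00Z"        # hypothetical public, per-session salt

key_neurostimulator = hkdf_sha256(shared_signal, salt, b"neurostim<->programmer")
key_programmer = hkdf_sha256(shared_signal, salt, b"neurostim<->programmer")
assert key_neurostimulator == key_programmer
print("derived session key:", key_neurostimulator.hex())
```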