Biblio

Found 1049 results

Filters: Keyword is policy-based governance
2019-05-20
Goncharov, N. I., Goncharov, I. V., Parinov, P. A., Dushkin, A. V., Maximova, M. M..  2019.  Modeling of Information Processes for Modern Information System Security Assessment. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1758-1763.

A new approach based on the formalism of hybrid automata is proposed for analyzing conflict processes between an information system and an information-security malefactor. An example of a probability-based assessment of the malefactor's victory is given, together with the possibility of abstracting from a specific type of probability density function for the residence time of the conflicting parties in their possible states. A model of the distribution of destructive informational influences in the information system is proposed to connect the spread of destructive information processes with the changes of the states of the information system's subjects. An example of analyzing the spread of destructive information processes is given.

Sadkhan, S. B., Reda, D. M..  2018.  A Proposed Security Evaluator for Cryptosystem Based on Information Theory and Triangular Game. 2018 International Conference on Advanced Science and Engineering (ICOASE). :306-311.

The purpose of this research is to propose a new mathematical model designed to evaluate the security of cryptosystems. The model mixes ideas from two basic mathematical theories: information theory and game theory. The role of information theory is to supply the model with security criteria for the cryptosystems. The role of game theory is to produce the value of the game, which represents the outcome of these criteria and thus reflects the cryptosystem's security. The proposed model supports an accurate, mathematical way to evaluate the security of cryptosystems by unifying the criteria derived from information theory and producing a single reasonable value.
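
To make the combination of the two theories concrete, the sketch below computes one information-theoretic criterion (Shannon entropy of a key distribution) and the value of a small zero-sum attacker/defender game solved as a linear program with scipy. The payoff numbers, the entropy criterion and how they are combined are illustrative assumptions, not the authors' actual model.

```python
# Toy sketch: combine an information-theoretic criterion (Shannon entropy)
# with the value of a zero-sum attacker/defender game solved by LP.
# The payoff numbers are illustrative, not the paper's model.
import numpy as np
from scipy.optimize import linprog

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def zero_sum_game_value(payoff):
    """Value of a zero-sum game for the row (defender) player.

    Solves: max v  s.t.  sum_i x_i * A[i, j] >= v for all columns j,
            with x on the probability simplex.
    """
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy), v (game value). Minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - sum_i x_i A[i,j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

# Example criterion: entropy of a (possibly biased) key distribution.
key_entropy = shannon_entropy([0.5, 0.25, 0.125, 0.125])   # 1.75 bits

# Illustrative defender-vs-attacker payoffs derived from such criteria
# (rows: defender configurations, columns: attacker strategies).
payoff = [[key_entropy, 1.0],
          [1.2,         2.0]]
print("game value (security score):", zero_sum_game_value(payoff))
```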

Cebe, Mumin, Kaplan, Berkay, Akkaya, Kemal.  2018.  A Network Coding Based Information Spreading Approach for Permissioned Blockchain in IoT Settings. Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. :470-475.

Permissioned Blockchain (PBC) has become a prevalent data structure to ensure that records are immutable and secure. However, PBC still faces significant challenges before it can be realized in different applications. One such challenge is the communication overhead of executing the Byzantine Agreement (BA) protocol needed for consensus building. As such, it may not be feasible to implement PBC in resource-constrained environments such as the Internet of Things (IoT). In this paper, we assess the communication overhead of running BA in an IoT environment that consists of wireless nodes (e.g., Raspberry Pis) with meshing capabilities. As the packet loss ratio is significant and makes BA infeasible to scale, we propose a network coding based approach that reduces the packet overhead and minimizes the consensus completion time of BA. Specifically, various network coding approaches are designed as a replacement for the TCP protocol, which relies on unicasting and acknowledgements. The evaluation on a network of Raspberry Pis demonstrates that our approach can significantly improve scalability, making BA feasible for medium-size IoT networks.
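
As background for the network coding idea, the following is a minimal sketch of random linear network coding over GF(2): instead of retransmitting individual packets with acknowledgements, a sender broadcasts random XOR combinations, and any set of linearly independent coded packets lets a receiver recover the originals by Gaussian elimination. The generation size, packet length and loss rate are made-up parameters, not the paper's design.

```python
# Toy sketch: random linear network coding over GF(2). A receiver recovers
# the K originals from any K linearly independent coded packets, without
# per-packet acknowledgements.
import random

K = 4                      # packets per generation (illustrative)
PKT_LEN = 8                # bytes per packet (illustrative)

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """One coded packet: (coefficient vector over GF(2), XOR payload)."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):                 # avoid the useless all-zero combination
        coeffs[random.randrange(len(packets))] = 1
    payload = bytes(PKT_LEN)
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor_bytes(payload, p)
    return coeffs, payload

def decode(coded):
    """Gaussian elimination over GF(2); returns the originals once rank == K."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(K):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                 # not enough independent packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(xor_bytes(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(K)]

originals = [bytes([i] * PKT_LEN) for i in range(K)]
received = []
while True:                             # lossy broadcast: drop ~30% of packets
    coded = encode(originals)
    if random.random() > 0.3:
        received.append(coded)
    if decode(received) == originals:
        print("recovered all", K, "packets from", len(received), "coded packets")
        break
```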

Gschwandtner, Mathias, Demetz, Lukas, Gander, Matthias, Maier, Ronald.  2018.  Integrating Threat Intelligence to Enhance an Organization's Information Security Management. Proceedings of the 13th International Conference on Availability, Reliability and Security. :37:1-37:8.

As security incidents might have disastrous consequences for an enterprise's information technology (IT), organizations need to secure their IT against threats. Threat intelligence (TI) promises to provide actionable information about current threats for information security management systems (ISMS). Common information ranges from malware characteristics to observed perpetrator origins, which allows security controls to be customized. The aim of this article is to assess the impact of utilizing publicly available threat feeds within corporate processes on an organization's security information level. We developed a framework to integrate TI for large corporations and evaluated this framework in cooperation with a globally acting manufacturer and retailer. During the development of the TI framework, a specific provider of TI was analyzed and chosen for integration within the vulnerability management process. The evaluation of this exemplary integration was assessed by members of the information security department at the cooperating enterprise. During our evaluation it was emphasized that management activities can be prioritized based on whether threats observed in the wild are targeting the organization or similar companies. Furthermore, indicators of compromise (IoC) provided by the chosen TI source can be automatically integrated using a provided software development kit. Theoretical relevance lies in the contribution towards verifying the proposed benefits of TI integration, such as increasing the resilience of an enterprise network, within a real-world environment. Overall, practitioners suggest that TI integration should result in enhanced management of security budgets and more resilient enterprise networks.

Hanauer, Tanja, Hommel, Wolfgang, Metzger, Stefan, Pöhn, Daniela.  2018.  A Process Framework for Stakeholder-Specific Visualization of Security Metrics. Proceedings of the 13th International Conference on Availability, Reliability and Security. :28:1-28:10.

Awareness and knowledge management are key components to achieve a high level of information security in organizations. However, practical evidence suggests that there are significant discrepancies between the typical elements of security awareness campaigns, the decisions made and goals set by top-level management, and routine operations carried out by systems administration personnel. This paper presents Vis4Sec, a process framework for the generation and distribution of stakeholder-specific visualizations of security metrics, which assists in closing the gap between theoretical and practical information security by respecting the different points of view of the involved security report audiences. An implementation for patch management on Linux servers, deployed at a large data center, is used as a running example.

Alamélou, Quentin, Berthier, Paul-Edmond, Cachet, Chloé, Cauchie, Stéphane, Fuller, Benjamin, Gaborit, Philippe, Simhadri, Sailesh.  2018.  Pseudoentropic Isometries: A New Framework for Fuzzy Extractor Reusability. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :673-684.

Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen showed that information-theoretically secure reusable fuzzy extractors are subject to strong limitations. Simoens et al. (IEEE S&P, 2009) then showed deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) used computational security to sidestep this problem, building a computationally secure reusable fuzzy extractor that corrects a sublinear fraction of errors. We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space to an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key. We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008). For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret distributions to have entropy in each symbol of the alphabet. Our constructions yield the first reusable fuzzy extractors that correct a constant fraction of errors. We also implement our set difference solution and describe two use cases.

Caminha, J., Perkusich, A., Perkusich, M..  2018.  A smart middleware to detect on-off trust attacks in the Internet of Things. 2018 IEEE International Conference on Consumer Electronics (ICCE). :1–2.

Security is a key concern in Internet of Things (IoT) designs. In a heterogeneous and complex environment, service providers and service requesters must trust each other. The On-off attack is a sophisticated trust threat in which a malicious device performs good and bad services randomly to avoid being rated as a low-trust node. Some countermeasures demand a prior level of trust knowledge and time to classify a node's behavior. In this paper, we introduce a Smart Middleware that automatically assesses the trust of IoT resources, evaluating service providers' attributes to protect against On-off attacks.

2019-03-28
Fernández, Maribel, Jaimunk, Jenjira, Thuraisingham, Bhavani.  2018.  Graph-Based Data-Collection Policies for the Internet of Things. Proceedings of the 4th Annual Industrial Control System Security Workshop. :9-16.

Smart industrial control systems (e.g., smart grid, oil and gas systems, transportation systems) are connected to the internet, and have the capability to collect and transmit data; as such, they are part of the IoT. The data collected can be used to improve services; however, there are serious privacy risks. This concern is usually addressed by means of privacy policies, but it is often difficult to understand the scope and consequences of such policies. Better tools to visualise and analyse data collection policies are needed. Graph-based modelling tools have been used to analyse complex systems in other domains. In this paper, we apply this technique to IoT data-collection policy analysis and visualisation. We describe graphical representations of category-based data collection policies and show that a graph-based policy language is a powerful tool not only to specify and visualise the policy, but also to analyse policy properties. We illustrate the approach with a simple example in the context of a chemical plant with a truck monitoring system. We also consider policy administration: we propose a classification of queries to help administrators analyse policies, and we show how the queries can be answered using our technique.
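
As an illustration of the graph-based policy idea, the sketch below encodes a tiny category-based data-collection policy as a directed graph and answers an administrator query by reachability. The node names (TruckGPS, FleetDispatch, and so on) are invented stand-ins for the paper's truck-monitoring example, not the authors' actual policy language.

```python
# Toy sketch: a category-based data-collection policy as a directed graph.
# Edges point from data categories to the parties/purposes allowed to
# collect them; an administrator query becomes a reachability question.
from collections import defaultdict, deque

policy = defaultdict(set)          # adjacency: node -> nodes it flows to

def allow(data_category, collector):
    policy[data_category].add(collector)

# Illustrative policy for a truck-monitoring system (names are assumptions).
allow("TruckGPS", "FleetDispatch")
allow("TruckGPS", "RouteAnalytics")
allow("RouteAnalytics", "ThirdPartyAds")   # a transitive flow worth flagging
allow("DriverID", "FleetDispatch")

def reachable_from(node):
    """All collectors that can (transitively) obtain data of this category."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in policy[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Administrator query: who can end up with the truck's GPS data?
print("TruckGPS may reach:", sorted(reachable_from("TruckGPS")))
# -> ['FleetDispatch', 'RouteAnalytics', 'ThirdPartyAds']
```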

2019-03-06
Khalil, Issa M., Guan, Bei, Nabeel, Mohamed, Yu, Ting.  2018.  A Domain Is Only As Good As Its Buddies: Detecting Stealthy Malicious Domains via Graph Inference. Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy. :330-341.

Inference based techniques are one of the major approaches to analyze DNS data and detect malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak "co-IP" relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis. They are effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improve the inference efficiency, we construct a new domain-IP graph that can work well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only a minor impact to detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.
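
To give a feel for inference over a domain-IP association graph, the following is a simplified score-spreading sketch: known-malicious seeds push suspicion to domains and dedicated IPs they connect to. This is a stand-in for the paper's belief propagation, and the tiny graph, seed labels, damping factor and round count are all illustrative assumptions.

```python
# Toy sketch: spread maliciousness scores over a domain-IP graph. This is a
# simplified score-propagation stand-in for full belief propagation; the
# graph and parameters are invented for illustration.
import itertools

# Edges only between domains and *dedicated* IPs (shared/public IPs are the
# weak "co-IP" links that the paper's scheme filters out).
edges = [("bad-seed.com", "203.0.113.7"),
         ("campaign-x.net", "203.0.113.7"),
         ("campaign-x.net", "203.0.113.9"),
         ("benign.org", "198.51.100.2")]

nodes = set(itertools.chain.from_iterable(edges))
neigh = {n: set() for n in nodes}
for a, b in edges:
    neigh[a].add(b)
    neigh[b].add(a)

score = {n: 0.0 for n in nodes}
score["bad-seed.com"] = 1.0            # known-malicious seed
DAMPING, ROUNDS = 0.7, 10

for _ in range(ROUNDS):
    new = {}
    for n in nodes:
        incoming = sum(score[m] for m in neigh[n]) / max(len(neigh[n]), 1)
        new[n] = max(score[n], DAMPING * incoming)   # seeds keep their score
    score = new

for n in sorted(nodes, key=score.get, reverse=True):
    print(f"{n:18s} {score[n]:.3f}")
```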

Wang, Jiawen, Wang, Wai Ming, Tian, Zonggui, Li, Zhi.  2018.  Classification of Multiple Affective Attributes of Customer Reviews: Using Classical Machine Learning and Deep Learning. Proceedings of the 2Nd International Conference on Computer Science and Application Engineering. :94:1-94:5.

Affective engineering is a methodology of designing products by collecting customer affective needs and translating them into product designs. It usually begins with questionnaire surveys to collect customer affective demands and responses. However, this process is expensive and can only be conducted periodically on a small scale. With the rapid development of e-commerce, a large number of customer product reviews are available on the Internet. Many studies have been done using opinion mining and sentiment analysis. However, the existing studies focus on polarity classification from a single perspective (such as positive and negative); the classification of multiple affective attributes receives less attention. In this paper, 3-class classifications of four different affective attributes (i.e. Soft-Hard, Appealing-Unappealing, Handy-Bulky, and Reliable-Shoddy) are performed using two classical machine learning algorithms (i.e. softmax regression and Support Vector Machine) and two deep learning methods (i.e. Restricted Boltzmann Machines and Deep Belief Network) on an Amazon dataset. The results show that the accuracy of the deep learning methods is above 90%, while the accuracy of the classical machine learning methods is about 64%. This indicates that deep learning methods perform significantly better than classical machine learning methods.
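
To make the two classical baselines concrete, here is a minimal 3-class sketch for one affective attribute (Soft-Hard plus a neutral class) using scikit-learn's softmax regression and a linear SVM over TF-IDF word features. The six review snippets and labels are made up for illustration; the paper uses an Amazon dataset.

```python
# Toy sketch: 3-class classification of one affective attribute from review
# text with softmax regression and an SVM. The dataset is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

reviews = ["the cushion feels wonderfully soft and plush",
           "a bit firm but still comfortable to hold",
           "rock hard surface, hurts after an hour",
           "soft padding, gentle on the wrist",
           "neither soft nor hard, just average",
           "stiff and hard, like sitting on wood"]
labels = ["soft", "neutral", "hard", "soft", "neutral", "hard"]

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)

softmax = LogisticRegression(max_iter=1000).fit(X, labels)  # multinomial softmax
svm = LinearSVC().fit(X, labels)

test = vec.transform(["very hard and stiff", "so soft and cozy"])
print("softmax:", softmax.predict(test))
print("svm    :", svm.predict(test))
```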

Zong, Fang, Yong, Ouyang, Gang, Liu.  2018.  3D Modeling Method Based on Deep Belief Networks (DBNs) and Interactive Evolutionary Algorithm (IEA). Proceedings of the 2018 International Conference on Big Data and Computing. :124-128.

3D modeling usually refers to the use of 3D software to build a model in virtual 3D space from 3D data. At present, most 3D modeling software, such as 3dmax, FLAC3D and Midas, requires repeatedly adjusting a model to obtain a satisfactory result, or coding for precise modeling; this involves complicated steps, strong professional skills and high modeling cost. Aiming at this problem, the paper presents a new 3D modeling method based on Deep Belief Networks (DBN) and an Interactive Evolutionary Algorithm (IEA). Following this method, characteristic vectors are first extracted from the vertices, normals and surfaces of the imported model samples. Secondly, an evolution strategy is used to evolve the extracted feature vectors stochastically, with manual grading controlling the direction of evolution and capturing the characteristics of user preferences in the process. Then, an evolution function matrix is used to establish a fitness approximation evaluation model that simulates subjective evaluation. Finally, the user can control the whole simulated evaluation process at any time and obtain a satisfactory model. The experimental results show that the method in this paper is feasible.

Viet, Hung Nguyen, Van, Quan Nguyen, Trang, Linh Le Thi, Nathan, Shone.  2018.  Using Deep Learning Model for Network Scanning Detection. Proceedings of the 4th International Conference on Frontiers of Educational Technologies. :117-121.

In recent years, new and devastating cyber attacks have amplified the need for robust cybersecurity practices. Preventing novel cyber attacks requires the invention of Intrusion Detection Systems (IDSs) which can identify previously unseen attacks. Many researchers have attempted to produce anomaly-based IDSs; however, they are not yet able to detect malicious network traffic consistently enough to warrant implementation in real networks. It clearly remains a challenge for the security community to produce IDSs that are suitable for implementation in the real world. In this paper, we propose a new approach using a Deep Belief Network with a combination of supervised and unsupervised machine learning methods for detecting port scanning attacks - the task of probing enterprise networks or Internet-wide services, searching for vulnerabilities or ways to infiltrate IT assets. Our proposed approach will be tested with network security datasets and compared with previously existing methods.
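
The sketch below approximates the "unsupervised pretraining plus supervised tuning" pipeline with scikit-learn: a single BernoulliRBM (a one-layer stand-in for a full DBN) learns features that a logistic regression classifier then uses. The synthetic flow features and thresholds are invented; the paper evaluates on real network security datasets.

```python
# Toy sketch: unsupervised RBM feature learning followed by a supervised
# classifier, approximating a DBN-style pipeline for scan detection.
# Synthetic flow features stand in for real IDS datasets.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

def synth_flows(n, scanning):
    # [distinct ports contacted, bytes per packet, SYNs without completion]
    if scanning:
        return np.column_stack([rng.integers(50, 500, n),
                                rng.normal(60, 5, n),
                                rng.integers(40, 400, n)])
    return np.column_stack([rng.integers(1, 5, n),
                            rng.normal(800, 200, n),
                            rng.integers(0, 3, n)])

X = np.vstack([synth_flows(200, True), synth_flows(200, False)])
y = np.array([1] * 200 + [0] * 200)   # 1 = scanning, 0 = benign

model = Pipeline([
    ("scale", MinMaxScaler()),                      # RBMs expect [0, 1] inputs
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```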

Xing, Z., Liu, L., Li, S., Liu, Y..  2018.  Analysis of Radiation Effects for Monitoring Circuit Based on Deep Belief Network and Support Vector Method. 2018 Prognostics and System Health Management Conference (PHM-Chongqing). :511-516.

The monitoring circuit is widely applied in radiation environments, and it is important to study the circuit's reliability under radiation effects. In this paper, an intelligent analysis method based on a Deep Belief Network (DBN) and a support vector method is proposed, building on the analysis of radiation experiments on the monitoring circuit. The Total Ionizing Dose (TID) of the monitoring circuit is used to identify the circuit degradation trend. Firstly, the output waveforms of the monitoring circuit are obtained by irradiating it with different TIDs. Subsequently, the Deep Belief Network model is trained to extract the features of the circuit signal. Finally, a Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied to classify and predict the remaining useful life (RUL) of the monitoring circuit. According to the experimental results, the performance of DBN-SVM exceeds that of the DBN method for feature extraction and classification, and SVR is effective for predicting the degradation.

Li, W., Li, S., Zhang, X., Pan, Q..  2018.  Optimization Algorithm Research of Logistics Distribution Path Based on the Deep Belief Network. 2018 17th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES). :60-63.

Given that urban traffic is currently complex and that traditional logistics distribution path optimization algorithms are not sensitive to changing road conditions and have limited applicability in actual logistics distribution, research on a logistics distribution path optimization algorithm based on the deep belief network is proposed. Firstly, a traffic forecast model based on the deep belief network is built, trained and verified by learning from large amounts of traffic data. On this basis, the predicted road conditions are combined with the traffic network to build a time-share traffic network; the access set and the pheromone variable of the ant algorithm are amended in accordance with the time-share traffic network, and an optimization algorithm for the logistics distribution path based on traffic forecasting is proposed. Finally, the superiority and practical value of the algorithm in actual distribution are verified through comparison tests with other logistics distribution path optimization algorithms.

Lin, Y., Liu, H., Xie, G., Zhang, Y..  2018.  Time Series Forecasting by Evolving Deep Belief Network with Negative Correlation Search. 2018 Chinese Automation Congress (CAC). :3839-3843.

The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search and Bayesian optimization are the most common methods of hyperparameter optimization. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates during the training processes. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. Experimental results statistically affirm the efficiency of the proposed model in obtaining better prediction results than conventional neural network models.

Man, Y., Ding, L., Xiaoguo, Z..  2018.  Nonlinear System Identification Method Based on Improved Deep Belief Network. 2018 Chinese Automation Congress (CAC). :2379-2383.

An accurate model is very important for the control of a nonlinear system. The traditional identification method based on a shallow BP network easily falls into a locally optimal solution. In this paper, a modeling method for nonlinear systems based on an improved Deep Belief Network (DBN) is proposed. A Continuous Restricted Boltzmann Machine (CRBM) is used as the first layer of the DBN, so that the network can deal more effectively with the actual data collected from real systems. Then, unsupervised training and supervised tuning are combined to improve the accuracy of identification. The simulation results show that the proposed method has a higher identification accuracy. Finally, the improved algorithm is applied to the identification of the diameter model of a silicon single crystal, and the simulation results prove its excellent ability for parameter identification.

Liu, Y., Wang, Y., Lombardi, F., Han, J..  2018.  An Energy-Efficient Stochastic Computational Deep Belief Network. 2018 Design, Automation Test in Europe Conference Exhibition (DATE). :1175-1178.

Deep neural networks (DNNs) are effective machine learning models to solve a large class of recognition problems, including the classification of nonlinearly separable patterns. The applications of DNNs are, however, limited by the large size and high energy consumption of the networks. Recently, stochastic computation (SC) has been considered to implement DNNs to reduce the hardware cost. However, it requires a large number of random number generators (RNGs) that lower the energy efficiency of the network. To overcome these limitations, we propose the design of an energy-efficient deep belief network (DBN) based on stochastic computation. An approximate SC activation unit (A-SCAU) is designed to implement different types of activation functions in the neurons. The A-SCAU is immune to signal correlations, so the RNGs can be shared among all neurons in the same layer with no accuracy loss. The area and energy of the proposed design are 5.27% and 3.31% (or 26.55% and 29.89%) of a 32-bit floating-point (or an 8-bit fixed-point) implementation. It is shown that the proposed SC-DBN design achieves a higher classification accuracy compared to the fixed-point implementation. The accuracy is only lower by 0.12% than the floating-point design at a similar computation speed, but with a significantly lower energy consumption.
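
For readers unfamiliar with stochastic computation, the sketch below shows the basic encoding the SC-DBN design builds on: a value in [0, 1] is represented as a random bitstream whose fraction of 1s equals the value, multiplication becomes a bitwise AND, and scaled addition becomes a multiplexer. The stream length and example values are illustrative assumptions, not the paper's circuit parameters.

```python
# Toy sketch of stochastic computing: p in [0, 1] is a bitstream whose
# fraction of 1s is p; AND multiplies, a MUX performs scaled addition.
# Stream length trades accuracy for energy.
import numpy as np

rng = np.random.default_rng(42)
N = 4096                              # bitstream length (illustrative)

def to_stream(p):
    return (rng.random(N) < p).astype(np.uint8)

def from_stream(s):
    return s.mean()

a, b = 0.8, 0.3
sa, sb = to_stream(a), to_stream(b)

product = sa & sb                               # AND gate ~ a * b
select = to_stream(0.5)
scaled_sum = np.where(select, sa, sb)           # MUX ~ (a + b) / 2

print("a*b       exact %.3f  stochastic %.3f" % (a * b, from_stream(product)))
print("(a+b)/2   exact %.3f  stochastic %.3f" % ((a + b) / 2, from_stream(scaled_sum)))
```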

2019-02-18
Fukushima, Keishiro, Nakamura, Toru, Ikeda, Daisuke, Kiyomoto, Shinsaku.  2018.  Challenges in Classifying Privacy Policies by Machine Learning with Word-based Features. Proceedings of the 2Nd International Conference on Cryptography, Security and Privacy. :62–66.

In this paper, we discuss challenges that arise when we try to automatically classify privacy policies using machine learning with words as the features. Since it is difficult for the general public to understand privacy policies, it is necessary to support them in doing so. To this end, the authors believe that machine learning is one of the promising ways, because users can grasp the meaning of policies through the outputs of a machine learning algorithm. Our final goal is to develop a system which automatically translates privacy policies into privacy labels [1]. Toward this goal, we classify sentences in privacy policies with category labels, using popular machine learning algorithms such as a naive Bayes classifier. We choose these algorithms because we could use the trained classifiers to evaluate keywords appropriate for privacy labels. Therefore, we adopt words as the features of those algorithms. Experimental results show about 85% accuracy. We think that much higher accuracy is necessary to achieve our final goal. By changing learning settings, we identified one reason for the low accuracy: privacy policies include many sentences which are not direct descriptions of information about the categories. Such sentences seem redundant, but they may be essential in legal documents in order to prevent misinterpretation. Thus, it is important for machine learning algorithms to handle these redundant sentences appropriately.
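
The sketch below illustrates the word-feature-plus-naive-Bayes setup described above with scikit-learn. The sentences and the three category labels are invented for illustration; the paper's corpus and label set are different.

```python
# Toy sketch: classify privacy-policy sentences into category labels using
# word counts and a naive Bayes classifier. Sentences and labels are
# invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

sentences = [
    "we collect your email address and phone number when you register",
    "your data may be shared with third party advertising partners",
    "we retain server logs for twelve months",
    "cookies are used to remember your preferences",
    "personal information is disclosed to affiliates and partners",
    "log data is stored for a limited period of time",
]
labels = ["collection", "sharing", "retention",
          "collection", "sharing", "retention"]

vec = CountVectorizer()               # word-based features
X = vec.fit_transform(sentences)
clf = MultinomialNB().fit(X, labels)

test = vec.transform(["we share usage data with our partners"])
print(clf.predict(test))              # expected: ['sharing']

# Inspecting the per-class word probabilities of the trained classifier is
# one way to surface candidate keywords for privacy labels.
```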

2019-02-13
Phuong, T. V. Xuan, Ning, R., Xin, C., Wu, H..  2018.  Puncturable Attribute-Based Encryption for Secure Data Delivery in Internet of Things. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. :1511–1519.
While the Internet of Things (IoT) is embraced as an important tool for efficiency and productivity, it is becoming an increasingly attractive target for cybercriminals. This work represents the first endeavor to develop practical Puncturable Attribute-Based Encryption schemes that are lightweight and applicable in the IoT. In the proposed scheme, attribute-based encryption is adopted for fine-grained access control. The secret keys are puncturable to revoke the decryption capability for selected messages, recipients, or time periods, thus protecting selected important messages even if the current key is compromised. In contrast to conventional forward encryption, a distinguishing merit of the proposed approach is that recipients can update their keys by themselves without key re-issuing from the key distributor. It does not require frequent communications between IoT devices and the key distribution center, nor does it need deleting components to expunge existing keys to produce a new key. Moreover, we devise a novel approach which efficiently integrates attribute-based keys and punctured keys such that the key size is roughly the same as that of the original attribute-based encryption. We prove the correctness of the proposed scheme and its security under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. We also implement the proposed scheme on Raspberry Pi and observe that the computation efficiency of the proposed approach is comparable to the original attribute-based encryption. Both encryption and decryption can be completed within tens of milliseconds.
Joshi, M., Joshi, K., Finin, T..  2018.  Attribute Based Encryption for Secure Access to Cloud Based EHR Systems. 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). :932–935.
Medical organizations find it challenging to adopt cloud-based electronic medical records services due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient-centric approach for EHR management, where the responsibility of authorizing data access is handled at the patient's end. This, however, creates a significant overhead for the patient, who has to authorize every access to their health record. This is not practical given the multiple personnel involved in providing care, and because at times the patient may not be in a state to provide this authorization. Hence there is a need to develop a proper authorization delegation mechanism for safe, secure and easy cloud-based EHR management. We have developed a novel, centralized, attribute-based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access to patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of the cloud-based EHR's access authority to medical providers. In this paper, we describe this novel ABE approach as well as the prototype system that we have created to illustrate it.
Yasumura, Y., Imabayashi, H., Yamana, H..  2018.  Attribute-based proxy re-encryption method for revocation in cloud storage: Reduction of communication cost at re-encryption. 2018 IEEE 3rd International Conference on Big Data Analysis (ICBDA). :312–318.
In recent years, many users have uploaded data to the cloud for easy storage and sharing with other users. At the same time, security and privacy concerns for the data are growing. Attribute-based encryption (ABE) enables both data security and access control by defining users with attributes so that only those users who have matching attributes can decrypt them. For real-world applications of ABE, revocation of users or their attributes is necessary so that revoked users can no longer decrypt the data. In actual implementations, ABE is used in hybrid with a symmetric encryption scheme such as the advanced encryption standard (AES) where data is encrypted with AES and the AES key is encrypted with ABE. The hybrid encryption scheme requires re-encryption of the data upon revocation to ensure that the revoked users can no longer decrypt that data. To re-encrypt the data, the data owner (DO) must download the data from the cloud, then decrypt, encrypt, and upload the data back to the cloud, resulting in both huge communication costs and computational burden on the DO depending on the size of the data to be re-encrypted. In this paper, we propose an attribute-based proxy re-encryption method in which data can be re-encrypted in the cloud without downloading any data by adopting both ABE and Syalim's encryption scheme. Our proposed scheme reduces the communication cost between the DO and cloud storage. Experimental results show that the proposed method reduces the communication cost by as much as one quarter compared to that of the trivial solution.
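
To show the cost that the proposed proxy re-encryption removes, the sketch below walks through the naive revocation path of the hybrid scheme: data is AES-encrypted, the AES key is (conceptually) wrapped under an attribute policy, and revocation forces the data owner to download, decrypt and re-encrypt the full ciphertext. The ABE layer is reduced to a placeholder function because no standard Python ABE library is assumed; the data size and policy string are illustrative.

```python
# Toy sketch of the hybrid scheme's naive revocation path that the paper
# improves on. The ABE layer is a placeholder; a real deployment would use
# a CP-ABE library to wrap the AES data key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def aes_encrypt(key, plaintext):
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def aes_decrypt(key, blob):
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def abe_wrap(aes_key, policy):
    # Placeholder for ABE encryption of the data key under an attribute policy.
    return {"policy": policy, "wrapped": aes_key}

# Initial upload by the data owner (DO).
data = os.urandom(5 * 1024 * 1024)                # 5 MB of "shared" data
k1 = AESGCM.generate_key(bit_length=128)
cloud = {"ct": aes_encrypt(k1, data),
         "key_blob": abe_wrap(k1, "doctor AND cardiology")}

# Naive revocation: DO downloads ct, decrypts, re-encrypts under a fresh key.
downloaded = cloud["ct"]                          # full-ciphertext download
plaintext = aes_decrypt(k1, downloaded)
k2 = AESGCM.generate_key(bit_length=128)
cloud = {"ct": aes_encrypt(k2, plaintext),
         "key_blob": abe_wrap(k2, "doctor AND cardiology")}

print("bytes moved DO<->cloud for one revocation:",
      len(downloaded) + len(cloud["ct"]))
# The proposed attribute-based proxy re-encryption instead transforms the
# ciphertext inside the cloud, so only small re-encryption keys cross the link.
```
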
Gunjal, Y. S., Gunjal, M. S., Tambe, A. R..  2018.  Hybrid Attribute Based Encryption and Customizable Authorization in Cloud Computing. 2018 International Conference On Advances in Communication and Computing Technology (ICACCT). :187–190.
Most centralized systems allow data access to a cloud user if the user has a certain set of satisfying attributes. Presently, one method to enforce such policies is to use an authorized cloud server to maintain the user data and control access to it. However, when one of the servers keeping the data is compromised, the security of the user data is compromised as well. To obtain access control, maintain data security and obtain precise computing results, data owners have to apply attribute-based security to encrypt the stored data. During the delegation of data to the cloud, the cloud servers may be tampered with via counterfeit ciphertexts. Furthermore, authorized users may be cheated by being told that they are unauthorized. Moreover, attribute-based access control policies for encryption are often complex. In this paper, we present Ciphertext-Policy Attribute-Based Encryption for maintaining complex access control over encrypted data with verifiable, customizable authorization. The proposed technique provides confidentiality to the encrypted data even if the storage server is compromised. Moreover, our method is highly secure against collusion attacks. Finally, a performance evaluation of the proposed system is elaborated together with an implementation.
Servos, Daniel, Osborn, Sylvia L..  2018.  HGAA: An Architecture to Support Hierarchical Group and Attribute-Based Access Control. Proceedings of the Third ACM Workshop on Attribute-Based Access Control. :1–12.
Attribute-Based Access Control (ABAC), a promising alternative to traditional models of access control, has gained significant attention in recent academic literature. This attention has led to the creation of a number of ABAC models, including our previous contribution, Hierarchical Group and Attribute-Based Access Control (HGABAC). However, to date few complete solutions exist that provide both an ABAC model and an architecture that could be implemented in real-life scenarios. This work aims to advance progress towards a complete ABAC solution by introducing the Hierarchical Group Attribute Architecture (HGAA), an architecture to support HGABAC and close the gap between a model and a real-world implementation. In addition to HGAA, we also present an attribute certificate specification that enables users to provide proof of attribute ownership in a pseudonymous and off-line manner, as well as an update to the Hierarchical Group Policy Language (HGPL) to support our namespace for uniquely identifying attributes across disparate security domains. Details of our HGAA implementation are given, and a preliminary analysis of its performance is discussed along with directions for future work.
Sepehri, Masoomeh, Trombetta, Alberto, Sepehri, Maryam, Damiani, Ernesto.  2018.  An Efficient Cryptography-Based Access Control Using Inner-Product Proxy Re-Encryption Scheme. Proceedings of the 13th International Conference on Availability, Reliability and Security. :12:1–12:10.
Inner-product encryption (IPE) is a well-known functional encryption primitive that allows decryption when the inner-product of the attribute vectors, upon which the encrypted data and the decryption key depend, is equal to zero. Using IPE, it is possible to define fine-grained access policies over encrypted data whose enforcement can be outsourced to the cloud where the data are stored. However, current IPE schemes do not support efficient access policy changes. In this paper, we propose an efficient inner-product proxy re-encryption (E-IPPRE) scheme that provides the proxy server with a transformation key, with which a ciphertext associated with an attribute vector can be transformed to a new ciphertext associated with a different attribute vector, providing a policy update mechanism with a performance suitable for many practical applications. We experimentally assess the efficiency of our protocol and show that it is selective attribute-secure against chosen-plaintext attacks in the standard model under the Asymmetric Decisional Bilinear Diffie-Hellman assumption.
Gür, Kamil Doruk, Polyakov, Yuriy, Rohloff, Kurt, Ryan, Gerard W., Savas, Erkay.  2018.  Implementation and Evaluation of Improved Gaussian Sampling for Lattice Trapdoors. Proceedings of the 6th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. :61–71.

We report on our implementation of a new Gaussian sampling algorithm for lattice trapdoors. Lattice trapdoors are used in a wide array of lattice-based cryptographic schemes including digital signatures, attributed-based encryption, program obfuscation and others. Our implementation provides Gaussian sampling for trapdoor lattices with prime moduli, and supports both single- and multi-threaded execution. We experimentally evaluate our implementation through its use in the GPV hash-and-sign digital signature scheme as a benchmark. We compare our design and implementation with prior work reported in the literature. The evaluation shows that our implementation 1) has smaller space requirements and faster runtime, 2) does not require multi-precision floating-point arithmetic, and 3) can be used for a broader range of cryptographic primitives than previous implementations.
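
For orientation, the sketch below samples the target distribution that trapdoor samplers are built around: a discrete Gaussian over the integers, here drawn by plain rejection sampling. The paper's algorithm is considerably more sophisticated (and specifically avoids multi-precision floating-point arithmetic); the sigma, tail cut and sample count here are illustrative assumptions only.

```python
# Toy sketch: rejection sampling from a discrete Gaussian over the integers,
# the basic building block that lattice trapdoor samplers refine.
import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=12):
    """Sample z with probability proportional to exp(-(z - center)^2 / (2 sigma^2))."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:
            return z

sigma = 4.0
samples = [sample_discrete_gaussian(sigma) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"empirical mean {mean:.2f}, empirical std {math.sqrt(var):.2f} (target ~{sigma})")
```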