Biblio

Filters: Author is Cheng, H.
Yang, Y., Lu, K., Cheng, H., Fu, M., Li, Z.  2020.  Time-controlled Regular Language Search over Encrypted Big Data. 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). 9:1041–1045.

The rapid development of cloud computing and the arrival of the big data era have brought users and the cloud closer together. Cloud computing offers powerful data computation and storage capabilities and can provide users with resources ubiquitously. However, users do not fully trust the cloud server's storage services, so much of their data is encrypted before being uploaded to the cloud. Searchable encryption protects the confidentiality of data while still supporting retrieval over the encrypted data. In this paper, we propose a time-controlled searchable encryption scheme with regular-language search over encrypted big data, which provides flexible search patterns and convenient data sharing. Our solution allows users who hold the data's secret keys to generate trapdoors themselves, while users without the secret keys can generate trapdoors with the help of a trusted third party, without the data owner's secret key being revealed. The system uses a time-controlled mechanism to collect the keywords queried by users and ensures that the querying user's identity is not directly exposed; the collected keywords form the basis for subsequent big data analysis. We conduct a security analysis of the proposed scheme and prove that it is secure. Simulation experiments and comparisons show that the scheme achieves practical efficiency.
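
A minimal illustrative sketch of the general idea behind time-bounded search trapdoors (not the authors' regular-language construction): a deterministic tag is derived from a keyword and a coarse time window with a keyed MAC, so a trapdoor only matches index entries built for the same window and expires automatically. The key, keyword, and window granularity below are hypothetical placeholders.

    import hmac
    import hashlib
    import time

    def trapdoor(secret_key: bytes, keyword: str, window_seconds: int = 3600) -> bytes:
        # Bind the tag to the current time window so stale trapdoors stop matching.
        window = int(time.time()) // window_seconds
        msg = keyword.encode() + window.to_bytes(8, "big")
        return hmac.new(secret_key, msg, hashlib.sha256).digest()

    # The data owner (or a trusted third party holding the key) builds the index;
    # the cloud server only compares opaque tags and never sees keywords or keys.
    owner_key = b"\x00" * 32  # placeholder key for illustration
    encrypted_index = {trapdoor(owner_key, "invoice"): ["ciphertext_id_42"]}
    print(encrypted_index.get(trapdoor(owner_key, "invoice"), []))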

Isakov, M., Bu, L., Cheng, H., Kinsy, M. A.  2018.  Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators. 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :62–67.

Machine learning (ML) models are often trained, at great computational cost, on private datasets that are expensive to collect or highly sensitive. The resulting models are commonly exposed through online APIs or embedded in hardware devices that are deployed in the field or handed to end users. This gives adversaries an incentive to steal the models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of ML models on hardware devices have so far received little attention. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
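
As a rough illustration of the second direction named above, the sketch below shows a software-level timing side-channel countermeasure: every inference call is padded to a fixed latency budget so that response time does not reveal data- or architecture-dependent execution paths. This is only a host-side analogue of the general idea, not the authors' hardware mechanism; the 50 ms budget and the stand-in model are hypothetical.

    import time

    def constant_latency(budget_seconds: float):
        # Pad a function's wall-clock time to a fixed budget so callers cannot
        # infer internal structure from response latency.
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.monotonic()
                result = fn(*args, **kwargs)
                remaining = budget_seconds - (time.monotonic() - start)
                if remaining > 0:
                    time.sleep(remaining)
                return result
            return inner
        return wrap

    @constant_latency(0.050)  # hypothetical 50 ms budget
    def infer(x):
        # Stand-in for a model forward pass whose runtime may vary with the input.
        return sum(x)

    print(infer([1, 2, 3]))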