Biblio

Filters: Author is Dai, H.
Rahmati, A., Moosavi-Dezfooli, S.-M., Frossard, P., Dai, H. 2020. GeoDA: A Geometric Framework for Black-Box Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8443–8452.
Adversarial examples are carefully perturbed images that fool image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings, where the adversary can only issue a small number of queries, each returning the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓp norms, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p=2, we theoretically show that our algorithm actually converges to the minimal perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms, as it generates smaller perturbations with a reduced number of queries.
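The abstract's core idea, estimating the boundary normal from top-1 label queries and projecting the clean image onto the boundary along it, can be illustrated with a minimal sketch. Everything below (the query interface, step sizes, and query budget) is a hypothetical simplification for exposition, not the authors' GeoDA implementation.

    import numpy as np

    def estimate_normal(query, x_boundary, orig_label, n_queries=100, sigma=0.02):
        """Estimate the decision-boundary normal from top-1 label queries.

        Random probes that flip the label lie (on average) across the boundary;
        their signed mean approximates the normal when the local curvature is small."""
        d = x_boundary.size
        normal = np.zeros(d)
        for _ in range(n_queries):
            z = np.random.randn(d)
            x_probe = x_boundary + sigma * z.reshape(x_boundary.shape)
            sign = -1.0 if query(x_probe) == orig_label else 1.0
            normal += sign * z
        norm = np.linalg.norm(normal)
        return normal / norm if norm > 0 else normal

    def geoda_style_step(query, x_orig, x_boundary, orig_label, n_queries=100):
        """One iteration: estimate the normal, then move the clean image toward the
        boundary along it to shrink the L2 perturbation (coarse line search stands
        in for the paper's exact projection)."""
        n = estimate_normal(query, x_boundary, orig_label, n_queries).reshape(x_orig.shape)
        for step in np.linspace(0.0, np.linalg.norm(x_boundary - x_orig), 20):
            candidate = x_orig + step * n
            if query(candidate) != orig_label:
                return candidate
        return x_boundary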
Jin, R., He, X., Dai, H. 2019. On the Security-Privacy Tradeoff in Collaborative Security: A Quantitative Information Flow Game Perspective. IEEE Transactions on Information Forensics and Security. 14:3273–3286.
To counter rapidly developing cyber-attacks, numerous collaborative security schemes have been proposed in the literature, in which multiple security entities exchange their observations and other relevant data to make more effective security decisions. However, the security-related information shared among the entities may contain sensitive information, and such exchange can raise privacy concerns, especially when the entities belong to different organizations. With this consideration, the interplay between the attacker and the collaborative entities is formulated as Quantitative Information Flow (QIF) games, in which QIF theory is adapted to measure the collaboration gain and the privacy loss of the entities in the information-sharing process. In particular, three games are considered, each corresponding to one possible scenario of interest in practice. Based on the game-theoretic analysis, the expected behaviors of both the attacker and the security entities are obtained. In addition, simulation results are presented to validate the analysis.
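As a rough illustration of the kind of tradeoff such games capture, the toy matrix game below weighs a collaboration gain against a privacy loss and approximates a mixed equilibrium by fictitious play. The payoff numbers, the weighting factor lam, and the zero-sum simplification are assumptions for exposition and do not come from the paper.

    import numpy as np

    # Rows: defender strategies (share nothing, share partial, share all observations)
    # Cols: attacker strategies (attack entity A, attack entity B)
    collab_gain = np.array([[0.0, 0.0],
                            [0.6, 0.5],
                            [1.0, 0.9]])
    privacy_loss = np.array([[0.0, 0.0],
                             [0.3, 0.3],
                             [0.8, 0.8]])
    lam = 1.0                                   # weight on privacy in the defender utility
    defender_payoff = collab_gain - lam * privacy_loss
    attacker_payoff = -defender_payoff          # zero-sum simplification

    def fictitious_play(A_def, A_att, iters=5000):
        """Approximate mixed equilibrium: each side best-responds to the empirical
        frequency of the opponent's past plays."""
        n, m = A_def.shape
        def_counts, att_counts = np.ones(n), np.ones(m)
        for _ in range(iters):
            att_freq = att_counts / att_counts.sum()
            def_counts[np.argmax(A_def @ att_freq)] += 1
            def_freq = def_counts / def_counts.sum()
            att_counts[np.argmax(def_freq @ A_att)] += 1
        return def_counts / def_counts.sum(), att_counts / att_counts.sum()

    def_mix, att_mix = fictitious_play(defender_payoff, attacker_payoff)
    print("defender sharing strategy:", def_mix)
    print("attacker target strategy:", att_mix)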
He, X., Islam, M. M., Jin, R., Dai, H. 2017. Foresighted Deception in Dynamic Security Games. 2017 IEEE International Conference on Communications (ICC). :1–6.

Deception has been widely considered in the literature as an effective means of enhancing security protection when the defender holds some private information about the ongoing rivalry that is unknown to the attacker. However, most existing works on deception assume static environments and thus consider only myopic deception, while practical security games between the defender and the attacker may unfold in dynamic scenarios. To better exploit the defender's private information in dynamic environments and improve security performance, a stochastic deception game (SDG) framework is developed in this work to enable the defender to conduct foresighted deception. To solve the proposed SDG, a new iterative algorithm that is provably convergent is developed. A corresponding learning algorithm is also developed to facilitate the defender in conducting foresighted deception in unknown dynamic environments. Numerical results show that the proposed foresighted deception offers a substantial performance improvement compared to conventional myopic deception.
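To illustrate what "foresighted" (multi-stage) optimization looks like in this setting, the toy sketch below runs value iteration on a small zero-sum stochastic game; the states, rewards, transitions, and the pure-strategy maximin simplification are all hypothetical and stand in for the paper's actual SDG solution algorithm.

    import numpy as np

    n_states, n_def, n_att = 3, 2, 2
    gamma = 0.9
    rng = np.random.default_rng(0)

    # reward[s, d, a]: defender stage payoff; trans[s, d, a, s']: transition probabilities
    reward = rng.uniform(-1, 1, size=(n_states, n_def, n_att))
    trans = rng.uniform(size=(n_states, n_def, n_att, n_states))
    trans /= trans.sum(axis=-1, keepdims=True)

    def foresighted_values(reward, trans, gamma, iters=500):
        """Iterate the maximin Bellman operator: the defender maximizes its
        discounted worst-case payoff over the attacker's responses."""
        V = np.zeros(n_states)
        for _ in range(iters):
            Q = reward + gamma * trans @ V          # Q[s, d, a]
            V_new = Q.min(axis=2).max(axis=1)       # worst-case over attacker, best over defender
            if np.max(np.abs(V_new - V)) < 1e-8:
                break
            V = V_new
        return V

    print("state values under foresighted deception:", foresighted_values(reward, trans, gamma))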

Jin, R., He, X., Dai, H., Dutta, R., Ning, P. 2017. Towards Privacy-Aware Collaborative Security: A Game-Theoretic Approach. 2017 IEEE Symposium on Privacy-Aware Computing (PAC). :72–83.

With the rapid development of sophisticated attack techniques, individual security systems that base all of their attack prevention and response decisions on their own observations and knowledge are becoming inadequate. To cope with this problem, collaborative security, in which a set of security entities is coordinated to perform specific security actions, has been proposed in the literature. In collaborative security schemes, multiple entities collaborate with each other by sharing threat evidence or analytics to make more effective decisions. Nevertheless, the anticipated information exchange raises privacy concerns, especially for privacy-sensitive entities. To obtain a quantitative understanding of the fundamental tradeoff between the effectiveness of collaboration and the entities' privacy, a repeated two-layer single-leader multi-follower game is proposed in this work. Based on our game-theoretic analysis, the expected behaviors of both the attacker and the security entities are derived, and the utility-privacy tradeoff curve is obtained. In addition, the existence of a Nash equilibrium (NE) for the collaborative entities is proven, and an asynchronous dynamic update algorithm is proposed to compute the entities' optimal collaboration strategies. Furthermore, the presence of Byzantine entities is considered and their influence is investigated. Finally, simulation results are presented to validate the analysis.
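The asynchronous update idea can be sketched with a simple best-response iteration in which each entity repeatedly picks the sharing level that maximizes a utility trading collaboration benefit against privacy cost. The utility form, parameters, and grid search below are illustrative assumptions, not the paper's game model.

    import numpy as np

    n = 4
    benefit = np.array([1.0, 0.8, 1.2, 0.9])     # value each entity gets from pooled sharing
    privacy = np.array([0.5, 0.7, 0.4, 0.6])     # per-unit privacy cost of its own sharing
    grid = np.linspace(0.0, 1.0, 101)            # candidate sharing levels x_i in [0, 1]

    def utility(i, x_i, x):
        """Entity i's payoff: concave benefit from pooled information minus
        a linear privacy cost for what it reveals itself."""
        others = x.sum() - x[i]
        return benefit[i] * np.log1p(x_i + 0.5 * others) - privacy[i] * x_i

    def asynchronous_best_response(rounds=100, tol=1e-6):
        """Entities update one at a time; with the concave utilities above the
        iteration settles at a pure-strategy Nash equilibrium."""
        x = np.zeros(n)
        for _ in range(rounds):
            x_prev = x.copy()
            for i in range(n):
                vals = [utility(i, g, x) for g in grid]
                x[i] = grid[int(np.argmax(vals))]
            if np.max(np.abs(x - x_prev)) < tol:
                break
        return x

    print("equilibrium sharing levels:", asynchronous_best_response())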

Dai, H., Zhu, X., Yang, G., Yi, X. 2017. A Verifiable Single Keyword Top-k Search Scheme against Insider Attacks over Cloud Data. 2017 3rd International Conference on Big Data Computing and Communications (BIGCOM). :111–116.

With the development of cloud computing and its economic benefits, more and more companies and individuals outsource their data and computation to clouds. At the same time, outsourcing places the data outside its owner's control and results in many security issues. Existing secure keyword search methods assume that cloud servers are curious-but-honest or partially honest, which leaves them powerless against the deliberately falsified or fabricated results of insider attacks. In this paper, we propose a verifiable single keyword top-k search scheme against insider attacks that can verify the integrity of search results. Data owners generate verification codes (VCs) for the corresponding files, which embed the ordered sequence information of the relevance scores between files and keywords. The files and corresponding VCs are then outsourced to cloud servers. When a data user performs a keyword search, the qualified result files are determined according to the relevance scores between the files and the queried keyword and are returned to the data user together with a VC. The data user verifies the integrity of the result files by reconstructing a new VC over the received files and comparing it with the received one. Performance evaluations have been conducted to demonstrate the efficiency and result redundancy of the proposed scheme.
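A minimal sketch of the verification-code idea, assuming a hash chain over the (file, relevance score) pairs in ranked order, is shown below; the actual scheme's encoding, encryption, and key handling are omitted, and all names are hypothetical.

    import hashlib

    def digest(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_vcs(ranked_files):
        """Data owner: given (file_id, relevance_score) pairs sorted by descending
        score for one keyword, chain a digest over each prefix so that the VC for
        prefix length k commits to the ordered top-k result."""
        vcs, chain = [], b""
        for file_id, score in ranked_files:
            chain = digest(chain + file_id.encode() + str(score).encode())
            vcs.append(chain)
        return vcs          # vcs[k-1] is the VC for a top-k query

    def verify_topk(result_files, vc):
        """Data user: rebuild the chain over the returned pairs in the returned
        order and compare against the VC obtained from the owner."""
        chain = b""
        for file_id, score in result_files:
            chain = digest(chain + file_id.encode() + str(score).encode())
        return chain == vc

    # Example: owner outsources files ranked for one keyword
    ranking = [("f7", 0.91), ("f2", 0.85), ("f5", 0.60), ("f9", 0.41)]
    vcs = build_vcs(ranking)
    print(verify_topk(ranking[:2], vcs[1]))                    # True: result intact
    print(verify_topk([("f2", 0.85), ("f7", 0.91)], vcs[1]))   # False: order falsified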

Huang, C., Hou, C., He, L., Dai, H., Ding, Y. 2017. Policy-Customized: A New Abstraction for Building Security as a Service. 2017 14th International Symposium on Pervasive Systems, Algorithms and Networks & 2017 11th International Conference on Frontier of Computer Science and Technology & 2017 Third International Symposium of Creative Computing (ISPAN-FCST-ISCC). :203–210.

Just as cloud customers have different performance requirements, they also have different security requirements for their computations in the cloud. Researchers have suggested a "security on demand" service model for cloud computing, in which secure computing environments are dynamically provisioned to cloud customers according to their specific security needs. The availability of secure computing platforms is a necessary but not a sufficient condition to convince cloud customers to move their sensitive data and code to the cloud; customers need further assurance that the security measures are indeed deployed and are working correctly. In this paper, we present a Policy-Customized Trusted Cloud Service architecture with a new remote attestation scheme and a virtual machine migration protocol, in which a cloud customer can customize the security policy of the computing environment and validate whether the current environment meets that policy throughout the virtual machine's life cycle. To demonstrate the feasibility of the proposed architecture, we implement, on top of the open-source Xen hypervisor, a prototype that supports customer-customized security policies and a VM migration protocol with customer-customized migration policy and validation.
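A simplified illustration of policy-driven attestation checks is sketched below; the policy entries, measurement values, and function names are hypothetical and do not reflect the paper's protocol or any Xen-specific interfaces.

    import hashlib

    # Customer policy: component -> expected measurement (e.g., hypervisor, kernel, migration config)
    policy = {
        "xen_hypervisor": hashlib.sha256(b"xen-4.11-hardened").hexdigest(),
        "dom0_kernel":    hashlib.sha256(b"kernel-5.4-measured").hexdigest(),
        "migration_cfg":  hashlib.sha256(b"encrypt-on-migrate").hexdigest(),
    }

    def attest(reported_measurements, policy):
        """Return (ok, violations): every component required by the customer's
        policy must be present and match the expected measurement."""
        violations = [c for c, expected in policy.items()
                      if reported_measurements.get(c) != expected]
        return len(violations) == 0, violations

    def allow_migration(src_report, dst_report, policy):
        """Migrate only when both source and destination hosts satisfy the
        customer-customized policy, mirroring the whole-life-cycle check."""
        return attest(src_report, policy)[0] and attest(dst_report, policy)[0]

    # Example: the destination host reports an unexpected hypervisor build
    src = dict(policy)                         # source host matches the policy
    dst = dict(policy)
    dst["xen_hypervisor"] = hashlib.sha256(b"xen-4.11-unpatched").hexdigest()
    print(attest(dst, policy))                 # (False, ['xen_hypervisor'])
    print(allow_migration(src, dst, policy))   # False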