Biblio
Cross-site scripting (XSS) is a common and serious attack that developers must consider when building web applications. We develop a system that provides practical exercises for learning how to create web applications that are secure against XSS. Our system uses free software and virtual machines, allowing low-cost, safe, and practical exercises. By using two virtual machines as the web server and the attacker host, the learner can conduct exercises demonstrating both XSS countermeasures and XSS attacks. In our system, learners use a web browser to learn and perform exercises related to XSS. Experimental evaluations confirm that the proposed system can support learning of XSS countermeasures.
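As an illustration of the kind of countermeasure such exercises cover (not taken from the paper; the function name and markup are placeholders), a minimal Python sketch of output escaping follows.

```python
# Minimal illustrative sketch: escaping untrusted input before embedding it in
# HTML, the basic XSS countermeasure such exercises typically teach.
import html

def render_comment(user_input: str) -> str:
    # html.escape converts <, >, & and quotes into HTML entities, so injected
    # markup like <script> is displayed as text rather than executed.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

if __name__ == "__main__":
    payload = '<script>alert("xss")</script>'
    print(render_comment(payload))
    # -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```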
A network covert timing channel (NCTC) transmits hidden information by modulating the inter-packet delays (IPDs) of legitimate network traffic. The ability of NCTCs to evade traditional security policies makes them a grave security concern. However, a robust method that can detect a wide range of NCTCs is still missing. In this paper, an NCTC detection method based on chaos theory and threshold secret sharing is proposed. Our method uses chaos theory to reconstruct a high-dimensional phase space from the one-dimensional IPD time series and to extract unique, stable channel traits. A channel identifier is then constructed using the secret-reconstruction strategy of threshold secret sharing, realizing the mapping from channel features to channel identifiers. Experimental results show that the approach detects a variety of NCTCs with a guaranteed true positive rate and greatly improves versatility and robustness.
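The phase-space reconstruction step the abstract refers to is standard time-delay embedding; a hedged numpy sketch follows (the embedding dimension m, delay tau, and the synthetic IPD series are illustrative choices, not the paper's parameters).

```python
# Illustrative sketch (not the paper's code): time-delay embedding of a
# one-dimensional inter-packet-delay (IPD) series into an m-dimensional
# phase space, the standard chaos-theoretic reconstruction step.
import numpy as np

def delay_embed(ipd: np.ndarray, m: int = 3, tau: int = 2) -> np.ndarray:
    """Return the matrix of delay vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    n = len(ipd) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for the chosen m and tau")
    return np.column_stack([ipd[i * tau : i * tau + n] for i in range(m)])

if __name__ == "__main__":
    ipd = np.random.exponential(scale=0.05, size=200)  # synthetic IPDs (seconds)
    phase_space = delay_embed(ipd, m=3, tau=2)
    print(phase_space.shape)  # (196, 3)
```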
New technologies such as augmented reality (AR) are used to enhance human capabilities and extend human functioning; nevertheless, they may cause distraction and incorrect human functioning. Systems comprising social entities (such as humans) and technical entities (such as augmented reality) are called socio-technical systems. To perform risk assessment in such systems, it is essential to consider the new dependability threats caused by augmented reality; for example, failure of an extended human function is a new type of dependability threat introduced into the system by such technologies. In particular, these new dependability threats must be identified, and modeling and analysis techniques must be extended to uncover their potential impacts. This research aims to provide a framework for risk assessment in AR-equipped socio-technical systems by identifying AR-extended human failures and AR-caused faults that lead to human failures. Our work also extends the modeling elements of an existing metamodel for socio-technical systems to enable modeling of AR-related dependability threats. This extended metamodel is expected to serve as the basis for extending analysis techniques to AR-equipped socio-technical systems.
To address the problem that traditional intrusion detection methods cannot effectively handle the massive, high-dimensional network traffic data of industrial control systems (ICS), this paper proposes an ICS intrusion detection strategy based on a bidirectional generative adversarial network (BiGAN). To improve the applicability of the BiGAN model to ICS intrusion detection, the optimal model is obtained through the single-variable principle and cross-validation. On this basis, a standard supervisory control and data acquisition (SCADA) data set is used in comparative experiments to verify the performance of the optimized model for ICS intrusion detection. The results show that the ICS intrusion detection method based on the optimized BiGAN achieves higher accuracy and shorter detection time than other methods.
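A hedged sketch of how a BiGAN's encoder, generator, and discriminator are commonly combined into an anomaly score for detection follows; the layer sizes, weighting, and untrained placeholder networks are assumptions for illustration, not the paper's configuration.

```python
# Illustrative sketch (not the paper's model): scoring a traffic record with a
# BiGAN-style anomaly score that blends reconstruction error through the
# encoder/generator pair with the discriminator's feedback.
import torch
import torch.nn as nn

FEATURES, LATENT = 20, 8  # placeholder dimensions

E = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, LATENT))      # encoder E(x) -> z
G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEATURES))      # generator G(z) -> x
D = nn.Sequential(nn.Linear(FEATURES + LATENT, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator D(x, z)

def anomaly_score(x: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """Higher score = more anomalous; meaningful only once E, G, D are trained."""
    z = E(x)
    recon = torch.norm(x - G(z), dim=1)                        # reconstruction term
    disc = torch.sigmoid(D(torch.cat([x, z], dim=1))).squeeze(1)
    return alpha * recon + (1.0 - alpha) * (1.0 - disc)        # weighted combination

x = torch.randn(4, FEATURES)   # four placeholder traffic records
print(anomaly_score(x))        # untrained networks here, so scores are arbitrary
```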
Cyber threats directly affect the reliability and availability of modern Industrial Control Systems (ICS) in terms of operations and processes. Given the variety of vulnerabilities and cyber threats, it is necessary to evaluate cyber security risks effectively and to control the uncertainties of cyber environments, and quantitative evaluation can help. To control the spread and impact of attacks on ICS networks effectively and in a timely manner, a probabilistic Multi-Attribute Vulnerability Criticality Analysis (MAVCA) model for impact estimation and prioritised remediation is presented. This offers a new approach for combining three major attributes: vulnerability severities influenced by environmental factors, attack probabilities relative to the vulnerabilities, and functional dependencies attributed to vulnerability host components. An evaluation on a miniature ICS testbed illustrates the usability of the model for determining the weakest link and setting security priorities in the ICS. This work can help create speedy and proactive security responses. The metrics derived in this work can serve as sub-metric inputs to a larger quantitative security metrics taxonomy and can be integrated into the security risk assessment scheme of a larger distributed system.
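To make the multi-attribute combination concrete, here is a hedged sketch that folds the three attributes named in the abstract into a single score and ranks vulnerabilities for remediation; the weights, scales, and CVE/component labels are illustrative placeholders, not the paper's calibration or data.

```python
# Illustrative sketch (weights and example entries are placeholders, not the
# paper's values): combining environment-adjusted severity, attack probability,
# and functional dependency into one criticality score, then ranking.
def criticality(severity: float, attack_prob: float, dependency: float,
                weights=(0.5, 0.3, 0.2)) -> float:
    """severity on a 0-10 CVSS-like scale; attack_prob and dependency in [0, 1]."""
    w_sev, w_prob, w_dep = weights
    return w_sev * (severity / 10.0) + w_prob * attack_prob + w_dep * dependency

vulns = {
    "CVE-X (HMI workstation)": criticality(9.8, 0.7, 0.9),
    "CVE-Y (PLC firmware)":    criticality(7.5, 0.4, 1.0),
    "CVE-Z (historian)":       criticality(5.3, 0.2, 0.3),
}
for name, score in sorted(vulns.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")   # highest score = weakest link, patch first
```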
The existing anonymized differential privacy model adopts a uniform anonymity method that ignores differences in personal privacy, which may lead to over- or under-protection of the original data [1]. Therefore, this paper proposes a personalized k-anonymity model for tuples (PKA) and a differential privacy data publishing algorithm (DPPA) based on personalized anonymity. First, tuples are classified according to the personality factor set by the user in the original data set, and the corresponding privacy protection relevance is calculated. Then, according to the classified personality factor values, the data set is clustered with different anonymity levels, and the quasi-identifier attributes of each cluster are aggregated and noise-added to realize anonymized differential privacy. Finally, the subsets are merged to obtain a data set that meets the release requirements. The correctness of the algorithm is analyzed theoretically, and the feasibility and effectiveness of the proposed algorithm are verified by comparison with similar algorithms.
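The aggregate-and-add-noise step rests on the standard Laplace mechanism; a hedged numpy sketch follows, in which the epsilon, sensitivity, and cluster values are illustrative assumptions rather than the paper's settings.

```python
# Illustrative sketch (epsilon, sensitivity, and the cluster are placeholders):
# aggregating a numeric quasi-identifier within one anonymity cluster and adding
# Laplace noise, the basic mechanism behind the differential-privacy step.
import numpy as np

def noisy_cluster_mean(values: np.ndarray, epsilon: float, sensitivity: float) -> float:
    """Release the cluster mean with Laplace noise calibrated to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return float(np.mean(values) + np.random.laplace(loc=0.0, scale=scale))

cluster_ages = np.array([34, 36, 35, 33, 37])   # one cluster of a quasi-identifier
print(noisy_cluster_mean(cluster_ages, epsilon=0.5, sensitivity=1.0))
```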
The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.
Enterprises around the globe have been searching for a way to securely deploy Android™ devices for work but have shied away from the Android platform due to ongoing fragmentation and security concerns. Numerous vulnerabilities have been reported in Android smartphones since the Android Lollipop release. Smartphones can be easily hacked by installing a malicious application, visiting a malicious web page, receiving a crafted MMS, interacting with plug-ins, certificate forging, checksum collisions, inter-process communication (IPC) abuse, and more. To highlight this issue, a manual analysis of Android vulnerabilities was performed using data available in the National Vulnerability Database (NVD) and the Android Vulnerability website. This paper covers the vulnerabilities that put the dual-persona support of Android 5 and above at risk, up to December 2017. In our security threat analysis, we identified a comprehensive list of Android vulnerabilities, vulnerable Android versions, affected manufacturers, and information regarding complete and partial patches released. To date, no published research systematically presents all the vulnerabilities and a vulnerability assessment for the dual-persona feature of Android smartphones. The data provided in this paper will open avenues for future research and support a better Android security model for dual persona.
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access. Such attacks are typically carried out by querying the target model with inputs that are synthetically generated or sampled from a surrogate dataset to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model that achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs. accuracy trade-off and incurs minimal computational overhead.
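A hedged sketch of the core idea follows: treat low-confidence queries as OOD and answer them with a misleading distribution. The thresholding rule, the misinformation rule, and the placeholder classifier are simplifications for illustration, not the paper's exact mechanism.

```python
# Illustrative sketch (simplified, not the paper's defense): answer queries that
# look out-of-distribution (low softmax confidence) with a misleading
# distribution, while returning normal predictions for confident queries.
import torch
import torch.nn.functional as F

def defended_predict(model: torch.nn.Module, x: torch.Tensor, tau: float = 0.7) -> torch.Tensor:
    """Return per-class probabilities, replacing low-confidence answers."""
    probs = F.softmax(model(x), dim=1).detach()
    conf, top = probs.max(dim=1)
    out = probs.clone()
    num_classes = probs.size(1)
    for i in range(x.size(0)):
        if conf[i] < tau:                                # treated as OOD -> misinform
            wrong = torch.full_like(probs[i], 1e-3)
            wrong[(top[i] + 1) % num_classes] = 1.0      # concentrate mass on a wrong class
            out[i] = wrong / wrong.sum()
    return out

model = torch.nn.Linear(10, 5)   # placeholder classifier over 5 classes
x = torch.randn(3, 10)           # placeholder queries
print(defended_predict(model, x))
```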
How does information regarding an adversary's intentions affect optimal system design? This paper addresses this question in the context of graphical coordination games, where an adversary can indirectly influence the behavior of agents by modifying their payoffs. We study a situation in which a system operator must select a graph topology in anticipation of the action of an unknown adversary. The designer can limit her worst-case losses by playing a security strategy, effectively planning for an adversary that intends maximum harm. However, fine-grained information regarding the adversary's intent may help the system operator fine-tune the defenses and obtain better system performance. In a simple model of adversarial behavior, this paper asks how much a system operator can gain by fine-tuning a defense for known adversarial intent. We find that if the adversary is weak, a security strategy is approximately optimal for any adversary type; however, for moderately strong adversaries, security strategies are far from optimal.
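As a hedged illustration in generic notation (not the paper's own formulation), the security strategy mentioned above can be written as the max-min choice
$$G^{\mathrm{sec}} \in \arg\max_{G \in \mathcal{G}} \; \min_{a \in \mathcal{A}} W(G, a),$$
where $\mathcal{G}$ is the set of admissible graph topologies, $\mathcal{A}$ the set of possible adversary actions, and $W(G, a)$ the resulting system performance; the operator thereby guarantees the value of the inner minimum regardless of which adversary actually appears.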
Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous size of these networks is beneficial when solving complex tasks, the sheer number of parameters also makes such networks vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense approach is much more efficient, as it only requires a single additional forward pass to achieve comparable performance.
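For context, here is a hedged sketch of a single-step adversary of the kind the abstract notes transfers easily between networks: the fast gradient sign method (FGSM). The model, data, and epsilon are placeholders, and this is not the paper's multi-step transfer attack.

```python
# Illustrative sketch (not the paper's attack): the fast gradient sign method
# (FGSM), a standard single-step adversarial perturbation.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # one gradient-sign step, then clamp back to the valid input range
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))  # placeholder model
x = torch.rand(2, 1, 28, 28)   # placeholder images in [0, 1]
y = torch.tensor([3, 7])       # placeholder labels
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max()) # perturbation magnitude is bounded by eps
```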
The rapidly growing body of shared threat intelligence not only helps security analysts reduce the time spent tracking attacks, but also opens possibilities for research on adversaries' thinking and decisions, which is important for further analysis of attackers' habits and preferences. In this paper, we analyze current threat intelligence models and frameworks suited to different modeling goals, and we propose a three-layer model (Goal, Behavior, Capability) to study the statistical characteristics of APT groups. Based on the proposed model, we construct a knowledge network composed of adversary behaviors and introduce a similarity measure that captures the degree of similarity by considering different semantic links between groups. After calculating similarity degrees, we apply the Girvan-Newman algorithm to discover communities; the clustering results show that community structures and boundaries do exist among APT groups.
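The community-detection step can be illustrated with networkx's Girvan-Newman implementation; in the hedged sketch below, the group names and similarity links are placeholders (an edge stands for two groups exceeding some similarity threshold), not the paper's data.

```python
# Illustrative sketch (groups and links are placeholders, not the paper's data):
# detecting community structure among APT groups with the Girvan-Newman
# algorithm over a group-similarity graph.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.Graph()
G.add_edges_from([
    ("APT-A", "APT-B"), ("APT-B", "APT-C"), ("APT-A", "APT-C"),
    ("APT-D", "APT-E"), ("APT-E", "APT-F"), ("APT-C", "APT-D"),
])

communities = next(girvan_newman(G))        # first partition produced by the algorithm
print([sorted(c) for c in communities])     # e.g. {A, B, C} and {D, E, F}
```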