Biblio
We use model-based testing techniques to detect logical vulnerabilities in implementations of the Wi-Fi handshake. This reveals new fingerprinting techniques, multiple downgrade attacks, and Denial of Service (DoS) vulnerabilities. Stations use the Wi-Fi handshake to securely connect with wireless networks. In this handshake, mutually supported capabilities are determined, and fresh pairwise keys are negotiated. As a result, a proper implementation of the Wi-Fi handshake is essential in protecting all subsequent traffic. To detect the presence of erroneous behaviour, we propose a model-based technique that generates a set of representative test cases. These tests cover all states of the Wi-Fi handshake, and explore various edge cases in each state. We then treat the implementation under test as a black box, and execute all generated tests. Determining whether a failed test introduces a security weakness is done manually. We tested 12 implementations using this approach, and discovered irregularities in all of them. Our findings include fingerprinting mechanisms, DoS attacks, and downgrade attacks where an adversary can force usage of the insecure WPA-TKIP cipher. Finally, we explain how one of our downgrade attacks highlights incorrect claims made in the 802.11 standard.
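A minimal sketch of the test-generation idea described above, assuming a toy abstraction of the 4-way handshake: the state names, edge-case stimuli, and expected outcomes below are hypothetical illustrations, not the paper's actual tool.

```python
from itertools import product

# Hypothetical abstraction of the 4-way handshake as a linear sequence of
# states; reaching state i means replaying the first i expected messages.
HANDSHAKE_STATES = ["IDLE", "GOT_MSG1", "GOT_MSG2", "GOT_MSG3"]

# Illustrative edge-case stimuli to inject in each state (not exhaustive).
EDGE_CASES = [
    "replayed_msg",       # retransmission of an earlier message
    "wrong_mic",          # message with an invalid integrity code
    "plaintext_msg",      # unencrypted message after keys are installed
    "downgraded_cipher",  # message advertising a weaker cipher (e.g. TKIP)
]

def generate_tests():
    """Yield test cases covering every state/edge-case combination."""
    for state, stimulus in product(HANDSHAKE_STATES, EDGE_CASES):
        yield {"reach_state": state, "inject": stimulus,
               "expect": "connection aborted or message ignored"}

for test in generate_tests():
    print(test)  # a real harness would drive the implementation as a black box
```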
Educational games have a potentially significant role to play in the increasing efforts to expand access to computer science education. Computational thinking is an area of particular interest, including the development of problem-solving strategies like divide and conquer. Existing games designed to teach computational thinking generally consist of either open-ended exploration with little direct guidance or a linear series of puzzles with lots of direct guidance, but little exploration. Educational research indicates that the most effective approach may be a hybrid of these two structures. We present Dragon Architect, an educational computational thinking game, and use it as context for a discussion of key open problems in the design of games to teach computational thinking. These problems include how to directly teach computational thinking strategies, how to achieve a balance between exploration and direct guidance, and how to incorporate engaging social features. We also discuss several important design challenges we have encountered during the design of Dragon Architect. We contend the problems we describe are relevant to anyone making educational games or systems that need to teach complex concepts and skills.
This paper presents the authors' preliminary framework for the drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that data governance and collaborative governance are the main drivers in creating a smart governance model. These pillars are supported by a legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, toward the implementation of evidence-based policies.
Internet of Things (IoT) devices offer new sources of contextual information, which can be leveraged by applications to make smart decisions. However, due to the decentralized and heterogeneous nature of such devices - each having only a partial view of its surroundings - there is an inherent risk of uncertain, unreliable, and inconsistent observations. This is a serious concern for applications making security-related decisions, such as context-aware authentication. We propose and evaluate a middleware for IoT that provides trustworthy context for a collaborative authentication use case. It abstracts a dynamic and distributed fusion scheme that extends the Chair-Varshney (CV) optimal decision fusion rule so that it can be used in a highly dynamic IoT environment. We compare performance and cost trade-offs against regular CV. Experimental evaluation demonstrates that our solution outperforms CV by 10% in highly dynamic IoT environments, with the ability to detect and mitigate unreliable sensors.
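For reference, a minimal sketch of the classical Chair-Varshney rule that the middleware extends: fuse binary local decisions by weighting each sensor with its detection and false-alarm statistics (the sensor numbers below are illustrative placeholders).

```python
import math

def chair_varshney(decisions, p_detect, p_false, prior_h1=0.5):
    """Fuse binary local decisions u_i into a global decision.

    decisions: list of 0/1 local decisions
    p_detect:  per-sensor detection probabilities P(u_i=1 | H1)
    p_false:   per-sensor false-alarm probabilities P(u_i=1 | H0)
    Decide H1 when the accumulated log-likelihood ratio is positive.
    """
    llr = math.log(prior_h1 / (1.0 - prior_h1))
    for u, pd, pf in zip(decisions, p_detect, p_false):
        if u == 1:
            llr += math.log(pd / pf)
        else:
            llr += math.log((1.0 - pd) / (1.0 - pf))
    return 1 if llr > 0 else 0

# Three sensors of varying reliability observing the same event.
print(chair_varshney([1, 1, 0],
                     p_detect=[0.9, 0.8, 0.6],
                     p_false=[0.05, 0.1, 0.3]))
```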
Scan-based testing is commonly used to increase testability and fault coverage; however, it is also known to be a liability for chip security. Research has shown that intellectual property (IP) or secret keys can be leaked through scan-based attacks. In this paper, we propose a dynamically-obfuscated scan design for protecting IPs against scan-based attacks. By perturbing all test patterns/responses and protecting the obfuscation key, the proposed architecture is proven to be robust against existing non-invasive scan attacks, and can protect all scan data from attackers in the foundry, assembly, and system development (i.e., OEMs) without compromising testability. Furthermore, the proposed architecture can be easily plugged into EDA-generated scan chains without a noticeable impact on the conventional integrated circuit (IC) design, manufacturing, and test flow. Finally, detailed security and experimental analyses have been performed on several benchmarks. The results demonstrate that the proposed method can protect chips from existing brute-force, differential, and other scan-based attacks that target the obfuscation key. The proposed design incurs low overhead in area, power consumption, and pattern generation time, and has no impact on test time.
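As a toy illustration of the perturbation idea only: XOR-scrambling scan data with a key-seeded LFSR stream. The paper's actual hardware obfuscation logic and key management are not reproduced here; this sketch merely shows how a key-dependent stream can mask patterns and responses reversibly.

```python
def lfsr_stream(seed, taps=(16, 14, 13, 11), width=16):
    """Fibonacci LFSR emitting one pseudo-random bit per iteration."""
    state = seed & ((1 << width) - 1)
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & ((1 << width) - 1)
        yield bit

def obfuscate(bits, key_seed):
    """XOR a scan pattern/response with a key-dependent stream. This is an
    involution: applying it twice with the same seed recovers the input."""
    stream = lfsr_stream(key_seed)
    return [b ^ next(stream) for b in bits]

pattern = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = obfuscate(pattern, key_seed=0xACE1)
assert obfuscate(scrambled, key_seed=0xACE1) == pattern
```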
This is an empirical study of the factors affecting cybersecurity breaches. Based on prior research, factors affecting information security were classified into three aspects - system management, policy and institutions, and awareness and culture - and measurement items were constructed for each. The analysis found that outsourcing of information security, data backup, and information security awareness among top management and employees were significant influencing variables, showing that the cognitive elements are very important. Ultimately, security depends on the members who use the security system accepting it, using it safely, and observing the relevant rules.
Extensible language frameworks aim to allow independently developed language extensions to be easily added to a host programming language. It should not require being a compiler expert, and the resulting compiler should "just work" as expected. Previous work has shown how specifications for parsing (based on context-free grammars) and for semantic analysis (based on attribute grammars) can be automatically and reliably composed, ensuring that the resulting compiler does not terminate abnormally. However, this work does not ensure that a property proven to hold for a language (or extended language) still holds when another extension is added, a problem we call interference. We present a solution to this problem based on a logical notion of coherence. We show that a useful class of language extensions, implemented as attribute grammars, preserve all coherent properties. If we also restrict extensions to only making use of coherent properties in establishing their correctness, then the correctness properties of each extension will hold when composed with other extensions. As a result, there can be no interference: each extension behaves as specified.
The display image on a visual display unit (VDU) can be retrieved from radiated and conducted emissions at a distance, leaving no trace. In this paper, the maximum eavesdropping distance for unintentional radiated and conducted electromagnetic (EM) signals that carry information is estimated theoretically under realistic parameters. First, the maximum eavesdropping distance for unintentional EM radiation is estimated based on the reception capability of a log-periodic antenna connected to a receiver, experimental data, free-space attenuation, and the additional attenuation along the propagation path. Then, based on a multi-conductor transmission model and experimental results, the maximum eavesdropping distance for the conducted emission is theoretically derived. The estimates demonstrate that information technology equipment may still pose a threat of information leakage even when it meets current EMC requirements.
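As a back-of-the-envelope illustration of the radiated case, a sketch that inverts the standard free-space path-loss formula to find the distance at which the emitted level drops to the receiver's sensitivity. All numbers are illustrative placeholders, not the paper's measured parameters.

```python
import math

def max_eavesdrop_distance_km(p_emit_dbm, ant_gain_dbi, sensitivity_dbm,
                              freq_mhz, extra_loss_db=0.0):
    """Distance at which the received level falls to receiver sensitivity,
    using the free-space path-loss model:
        FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    """
    budget_db = p_emit_dbm + ant_gain_dbi - extra_loss_db - sensitivity_dbm
    return 10 ** ((budget_db - 20 * math.log10(freq_mhz) - 32.44) / 20)

# Illustrative numbers: a weak video-signal harmonic at 300 MHz, log-periodic
# antenna with 7 dBi gain, sensitive receiver, 10 dB additional path loss.
d = max_eavesdrop_distance_km(p_emit_dbm=-60, ant_gain_dbi=7,
                              sensitivity_dbm=-110, freq_mhz=300,
                              extra_loss_db=10)
print(f"max distance ~ {d * 1000:.1f} m")
```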
Here we explore the applicability of the traditional sliding-window-based convolutional neural network (CNN) detection pipeline and of region-based object detection techniques such as the Faster Region-based CNN (R-CNN) and Region-based Fully Convolutional Networks (R-FCN) to the problem of object detection in X-ray security imagery. Within this context, with limited dataset availability, we employ a transfer learning paradigm for network training, tackling both single and multiple object detection problems over a number of R-CNN/R-FCN variants. The use of a first-stage region proposal within Faster R-CNN and R-FCN provides superior results to the traditional sliding-window-driven CNN (SWCNN) approach. Using Faster R-CNN with VGG16, pretrained on the ImageNet dataset, we achieve 88.3 mAP for a six-class X-ray object detection problem. Using R-FCN with ResNet-101 yields 96.3 mAP for the two-class firearm detection problem, requiring 0.1 seconds of computation per image. Overall, we illustrate the comparative performance of these techniques as object localization strategies within cluttered X-ray security imagery.
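The paper's detectors are VGG16- and ResNet-101-based; as a rough analogue of the transfer-learning setup, a sketch with torchvision's ResNet-50-FPN Faster R-CNN. The class count and the dummy training step are placeholders, not the paper's pipeline.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 7  # e.g. six X-ray object classes + background (placeholder)

# Start from pretrained weights and swap in a new box predictor head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Fine-tune on the (small) target dataset rather than training from scratch.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One dummy training step with a random image and one box, standing in for a
# real X-ray data loader.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[50.0, 50.0, 200.0, 200.0]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)   # dict of RPN and detection-head losses
loss = sum(losses.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```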
Meta-programs are programs that generate other programs, but in weakly type-safe systems, type-checking a meta-program only establishes its own type safety, and generated programs need additional type-checking after generation. Strong type safety of a meta-program implies type safety of any generated object program, a property with important engineering benefits. Current strongly type-safe systems suffer from expressivity limitations and cannot support many meta-programs found in practice, for example automatic generation of lenses. To overcome this, we move away from the idea of staged meta-programming. Instead, we use an off-the-shelf dependently-typed language as the meta-language and a relatively standard, intrinsically well-typed representation of the object language. We scale this approach to practical meta-programming, by choosing a high-level, explicitly typed intermediate representation as the object language, rather than a surface programming language. We implement our approach as a library for the Glasgow Haskell Compiler (GHC) and evaluate it on several meta-programs, including a deriveLenses meta-program taken from a real-world Haskell lens library. Our evaluation demonstrates expressivity beyond the state of the art and applicability to real settings, at little cost in terms of code size.
Applications for the analysis of biomedical data are complex programs and often consist of multiple components. Reuse of existing solutions from external code repositories or program libraries is common in algorithm development. To ease reproducibility as well as the transfer of algorithms and required components into distributed infrastructures, Linux containers are increasingly used in those environments that are at least partly connected to the Internet. However, concerns about untrusted applications remain and are of high importance when medical data is processed. Additionally, the portability of the containers must be ensured by using only security technologies that do not require additional kernel modules. In this paper, we describe measures and a solution to secure the execution of an example biomedical application for the normalization of multidimensional biosignal recordings. This application, the required runtime environment, and the security mechanisms are installed in a Docker-based container. A fine-grained restricted environment (sandbox) for executing the application and preventing unwanted behaviour is created inside the container. The sandbox is based on the filtering of system calls, which are required to interact with the operating system to access potentially restricted resources, e.g., the filesystem or network. Due to the low-level character of system calls, creating an adequate rule set for the sandbox is challenging. Therefore, the presented solution includes a monitoring component that collects the data required to define the rules for the application sandbox. Performance evaluation shows no significant impact of the resulting sandbox on application execution, while detailed monitoring can increase runtime by more than 420% in some cases.
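A minimal sketch of how such monitoring output could be turned into a Docker-compatible seccomp whitelist profile. The observed syscall list and file names are illustrative; the paper derives its rule set from its own monitoring component.

```python
import json

def build_seccomp_profile(observed_syscalls):
    """Build a Docker-compatible seccomp profile that allows only the
    syscalls observed while monitoring the application and denies the
    rest with EPERM."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [
            {"names": sorted(set(observed_syscalls)),
             "action": "SCMP_ACT_ALLOW"}
        ],
    }

# Illustrative monitoring output for the biosignal-normalization application.
observed = ["read", "write", "openat", "close", "mmap", "brk", "exit_group"]
with open("app-seccomp.json", "w") as fh:
    json.dump(build_seccomp_profile(observed), fh, indent=2)
# Run with: docker run --security-opt seccomp=app-seccomp.json <image>
```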
NoSQL databases have gained a lot of popularity over the last few years. They are now used in many new system implementations that work with vast amounts of data, which will typically also include sensitive information that needs to be secured. NoSQL databases also underlie a number of cloud implementations that are increasingly being used to store sensitive information by various organisations. This has made NoSQL databases a new target for hackers and state-sponsored actors. Forensic examinations of compromised systems need to be conducted to determine what exactly transpired and who was responsible. This paper examines specifically whether NoSQL databases have security features that leave relevant traces, so that accurate forensic attribution can be conducted. The seeming lack of default security measures such as access control and logging prompted this examination. A survey of the top-ranked NoSQL databases was conducted to establish what authentication and authorisation features are available. Additionally, the provided logging mechanisms were examined, since access control without any auditing would not aid forensic attribution much. Some of the surveyed NoSQL databases do not provide adequate access control mechanisms or logging features that leave the traces needed for forensic attribution. The other surveyed NoSQL databases do provide adequate mechanisms and logging traces, but these are not enabled or configured by default. This means that in many cases they might not be available, leading to insufficient information for accurate forensic attribution even on those databases.
The challenge of maintaining the confidentiality of data stored and processed in a remote database or cloud is quite urgent. Homomorphic encryption may solve this problem, because it allows certain functions to be computed over encrypted data without first decrypting it. Fully homomorphic encryption schemes have a number of limitations, such as the accumulation of noise and the growth of ciphertext size as operations are performed, and the range of supported operations is limited. Many homomorphic encryption schemes and their modifications have been investigated; more than 25 reports on homomorphic encryption schemes were published on the Cryptology ePrint Archive in 2016 alone. We present an overview of current fully homomorphic encryption schemes and analyze which database-specific operations homomorphic cryptosystems allow. We also investigate the possibility of sorting over encrypted data and present our approach to comparing data encrypted with a Multi-bit FHE scheme.
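For intuition about computing on ciphertexts, a toy sketch of the additively homomorphic Paillier scheme - a partially homomorphic system, not the Multi-bit FHE scheme discussed in the paper - showing addition performed directly on encrypted values. The key size is deliberately tiny and insecure.

```python
import math
import random

# Toy Paillier keypair (insecure key size, for illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # L(g^lam mod n^2)^-1 mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:               # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Multiplying ciphertexts adds the underlying plaintexts - no decryption
# happens until the very end.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42
```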
A recommender system suggests items that might interest users in social networks. Collaborative filtering is an approach that works based on similarity and recommends items liked by other, similar users. A trust model adopts the users' trust network in place of similarity. A multi-faceted trust model considers multiple, heterogeneous trust relationships among users and recommends items based on the ratings that exist in the network of trustees for a specific facet. This paper applies a genetic algorithm to estimate the parameters of a multi-faceted trust model, in which the trust weights are calculated from the ratings and the trust network for each facet separately. The model was built on the Epinions data set, which includes consumers' opinions, their ratings for items, and the web-of-trust network. It was used to predict users' ratings for items in different facets, with the root mean squared error (RMSE) of the predictions as the performance measure. Empirical evaluations demonstrate that multi-faceted models improve the performance of the recommender system.
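A minimal sketch of the estimation idea: a genetic algorithm searching for facet weights that minimize RMSE. The per-facet predictions, true ratings, and GA hyperparameters below are stand-ins, not the paper's multi-faceted trust model.

```python
import random

# Stand-in data: per-facet trust-based rating predictions for three items,
# plus the true ratings; the GA searches for facet weights minimizing RMSE.
FACET_PREDICTIONS = [(4.0, 3.0, 5.0), (2.0, 3.5, 1.0), (5.0, 4.0, 4.5)]
TRUE_RATINGS = [4.5, 2.0, 4.8]

def rmse(weights):
    total = sum(weights) or 1e-9
    err = 0.0
    for preds, truth in zip(FACET_PREDICTIONS, TRUE_RATINGS):
        est = sum(w * p for w, p in zip(weights, preds)) / total
        err += (est - truth) ** 2
    return (err / len(TRUE_RATINGS)) ** 0.5

def evolve(pop_size=30, generations=100, n_facets=3):
    pop = [[random.random() for _ in range(n_facets)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=rmse)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_facets)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(n_facets)] = random.random()
            children.append(child)
        pop = survivors + children
    return min(pop, key=rmse)

best = evolve()
print("best facet weights:", best, "RMSE:", rmse(best))
```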
Privacy preservation in data publication has been an important research field over the past few decades. One of the fundamental challenges in privacy-preserving data publication is the trade-off between the privacy and the utility of a single, independent data set. However, recent research has shown that even an advanced privacy mechanism, namely differential privacy, is vulnerable when multiple data sets are correlated. In this case, the trade-off problem between privacy and utility evolves into a game problem, in which the payoff of each player depends not only on his own privacy parameter but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set as influenced by the other data sets. Then, we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash equilibrium and demonstrate sufficient conditions for it in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash equilibrium.
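For context, a minimal sketch of the standard ε-differential-privacy building block each player uses when sanitizing a data set: the Laplace mechanism. The query and parameter values are illustrative.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace(sensitivity/epsilon) noise. A smaller
    epsilon gives stronger privacy at lower utility - the per-player
    trade-off that becomes a game once the data sets are correlated."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# E.g. a counting query (sensitivity 1) released at three privacy levels.
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_mechanism(1000, sensitivity=1.0, epsilon=eps))
```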
Mobile Ad Hoc Network (MANET) technology provides intercommunication between nodes where no infrastructure is available for communication. MANETs attract considerable research attention because they are cost-effective and easy to implement. The main challenge in MANETs is their vulnerability: nodes are highly exposed to attacks, as are the data they hold and the data flowing through them. One of the main reasons for these vulnerabilities is the communication policy, which makes nodes interdependent for interaction and data flow. Attackers exploit this mutual trust between nodes by injecting a malicious node or replicating a legitimate node in the MANET. One such attack is the blackhole attack. In this study, we discuss the behavior of the blackhole attack and propose a lightweight countermeasure that uses built-in functions.
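A toy sketch of one common lightweight blackhole check: flagging route replies whose destination sequence number is implausibly far ahead of the locally known value, since a blackhole advertises inflated sequence numbers so its fake route always wins. The threshold and packet fields are illustrative; the paper's exact built-in-function-based method is not reproduced here.

```python
# Last destination sequence numbers this node has seen, per destination.
known_seq = {"node42": 118}

SEQ_JUMP_THRESHOLD = 50  # illustrative; tune to network churn

def suspicious_rrep(rrep):
    """Return True if an AODV-style route reply looks like blackhole bait:
    a destination sequence number far beyond what this node has observed."""
    expected = known_seq.get(rrep["dest"], 0)
    return rrep["dest_seq"] - expected > SEQ_JUMP_THRESHOLD

print(suspicious_rrep({"dest": "node42", "dest_seq": 9999}))  # True: drop it
print(suspicious_rrep({"dest": "node42", "dest_seq": 121}))   # False: accept
```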
A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast, independent of the number of nodes. This scheme is based on geometric characteristics and provides information-theoretic security against analysis of the transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security; the results show that our scheme is suitable for an Internet-of-Things (IoT) network.
In this paper, we present a framework for graph-based representation of relation between sensors and alert types in a security alert sharing platform. Nodes in a graph represent either sensors or alert types, while edges represent various relations between them, such as common type of reported alerts or duplicated alerts. The graph is automatically updated, stored in a graph database, and visualized. The resulting graph will be used by network administrators and security analysts as a visual guide and situational awareness tool in a complex environment of security alert sharing.
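A minimal sketch of this bipartite-style representation using networkx; the node names, edge attributes, and query are illustrative (the platform itself stores the graph in a graph database).

```python
import networkx as nx

g = nx.Graph()

# Two node kinds: sensors (IDS probes, honeypots, ...) and alert types.
g.add_node("sensor:ids-1", kind="sensor")
g.add_node("sensor:honeypot-2", kind="sensor")
g.add_node("alert:ssh-bruteforce", kind="alert_type")

# Relations: which sensor reports which alert type, plus sensor-sensor
# links such as "reports duplicated alerts".
g.add_edge("sensor:ids-1", "alert:ssh-bruteforce", relation="reports", count=1520)
g.add_edge("sensor:honeypot-2", "alert:ssh-bruteforce", relation="reports", count=310)
g.add_edge("sensor:ids-1", "sensor:honeypot-2", relation="duplicates")

# E.g. find all sensors sharing a common alert type with a given sensor.
shared = {n for a in g.neighbors("sensor:ids-1")
          if g.nodes[a]["kind"] == "alert_type"
          for n in g.neighbors(a) if n != "sensor:ids-1"}
print(shared)
```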
In an Internet of Things (IoT) network, each node (device) both provides and requires services. With the growth of IoT, the number of nodes providing the same service has also increased, creating the problem of selecting one reliable service from among many providers. In this paper, we propose a scalable graph-based collaborative filtering recommendation algorithm, improved using trust, to solve the service selection problem; unlike a central recommender, which fails to keep up, it can scale to match the growth of IoT. Using this recommender, a node can predict its ratings for the nodes providing the required service and then select the best-rated service provider.
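A minimal sketch of the prediction step as a trust-weighted average of neighbours' ratings. The trust values, ratings, and node names are made-up placeholders; the paper's algorithm adds graph-based machinery for similarity and scalability on top of this idea.

```python
# Trust the querying node places in its neighbours (placeholder values).
trust = {"nodeA": 0.9, "nodeB": 0.4, "nodeC": 0.7}

# Neighbours' ratings of candidate providers of the required service.
ratings = {
    "nodeA": {"provider1": 4.0, "provider2": 2.0},
    "nodeB": {"provider1": 3.0},
    "nodeC": {"provider2": 5.0},
}

def predict(provider):
    """Trust-weighted mean of neighbour ratings for one provider."""
    num = den = 0.0
    for neighbour, t in trust.items():
        r = ratings.get(neighbour, {}).get(provider)
        if r is not None:
            num += t * r
            den += t
    return num / den if den else None

best = max(("provider1", "provider2"), key=lambda p: predict(p) or 0)
print(best, predict(best))
```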
The Internet plays a crucial role in today's life, so the usage of online social networks is increasing steadily. Through online social networks, people across the world can share multimedia information quickly and easily keep in touch with friends. Security in authentication is a big challenge in online social networks, and authentication is the preliminary process for identifying legitimate users. Conventionally, alphanumeric text-based passwords are used for authentication. However, the main flaws of text-based passwords are that they are highly vulnerable to attacks and difficult to recall at authentication time when used irregularly. To overcome these shortcomings, we propose graphical password authentication, an approach based on a combination of pictures. It is less vulnerable to attacks, and humans can recall pictures more easily than text, so graphical passwords are a better alternative to text passwords. As image uploads shared by users through online sites increase, privacy preservation has become a major problem. We therefore need caption-based metadata stratification of images, which automatically suggests a similar category already in the database; it works by comparing the caption metadata of a new album with the caption metadata already in the database, or by extracting synonyms of the new album's caption metadata to check its similarity with the caption metadata already in the database. This stratification offers enhanced automatic privacy prediction for images uploaded to online social networks, where privacy is an inevitable concern and privacy violations are a major issue. We therefore propose automatic policy prediction for uploaded images classified by caption metadata; the automatically predicted policy is offered to the user as a hassle-free privacy setting.
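A minimal sketch of the synonym-based caption comparison, with WordNet via NLTK standing in for whatever synonym source the system uses; the captions and the Jaccard measure are illustrative choices, not the paper's exact method.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def expanded_terms(caption):
    """Caption words plus their WordNet synonyms, as a set of terms."""
    terms = set()
    for word in caption.lower().split():
        terms.add(word)
        for syn in wn.synsets(word):
            terms.update(lemma.lower() for lemma in syn.lemma_names())
    return terms

def caption_similarity(new_caption, stored_caption):
    """Jaccard similarity between synonym-expanded caption term sets."""
    a, b = expanded_terms(new_caption), expanded_terms(stored_caption)
    return len(a & b) / len(a | b)

# If a new album's caption is close to a stored category, the category's
# privacy policy can be suggested to the user.
print(caption_similarity("beach holiday photos", "seaside vacation pictures"))
```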
This paper presents a new fractional-order hidden strange attractor generated by a chaotic system without equilibria. The proposed non-equilibrium fractional-order chaotic system (FOCS) is asymmetric, dissimilar, and topologically inequivalent to typical chaotic systems, and challenges the conventional notion that the presence of unstable equilibria is mandatory for chaos to exist. The new fractional-order model displays rich bifurcation behaviour, undergoing a period-doubling route to chaos with the fractional order α as the bifurcation parameter. The hidden attractor dynamics are studied with the aid of phase portraits, sensitivity to initial conditions, the fractal Lyapunov dimension, the maximum Lyapunov exponent spectrum, and bifurcation analysis. The minimum commensurate dimension to display chaos is determined. With a view to utilizing it in chaos-based cryptology and information coding, a synchronisation control scheme is designed. Finally, the theoretical analyses are validated by numerical simulation results, which are in good agreement with the theory.
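A sketch of the standard Grünwald-Letnikov discretization often used to simulate commensurate fractional-order systems numerically, with the commensurate order alpha as a tunable parameter. The Lorenz equations below are only a stand-in right-hand side; the paper's no-equilibrium system is not reproduced here.

```python
# Grunwald-Letnikov scheme for D^alpha x = f(x):
#   x_k = f(x_{k-1}) * h^alpha - sum_{j=1..k} c_j * x_{k-j},
# with c_0 = 1 and c_j = (1 - (1 + alpha)/j) * c_{j-1}.
def simulate(f, x0, alpha=0.98, h=0.01, steps=2000):
    n = len(x0)
    c = [1.0]
    for j in range(1, steps + 1):
        c.append((1 - (1 + alpha) / j) * c[-1])
    hist = [list(x0)]
    for k in range(1, steps + 1):
        fx = f(hist[-1])
        # Full memory term; long runs would use short-memory truncation.
        mem = [sum(c[j] * hist[k - j][i] for j in range(1, k + 1))
               for i in range(n)]
        hist.append([fx[i] * h ** alpha - mem[i] for i in range(n)])
    return hist

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):  # stand-in system
    x, y, z = s
    return [sigma * (y - x), rho * x - y - x * z, x * y - beta * z]

traj = simulate(lorenz, [0.1, 0.1, 0.1])
print(traj[-1])
```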
Byte-addressable non-volatile memory technology is emerging as an alternative to DRAM for main memory. This new Non-Volatile Main Memory (NVMM) allows programmers to store important data in data structures in memory instead of serializing it to the file system, thereby providing a substantial performance boost. However, modern systems reorder memory operations and utilize volatile caches for better performance, making it difficult to ensure a consistent state in NVMM. Intel recently announced a new set of persistence instructions: clflushopt, clwb, and pcommit. These new instructions make it possible to implement fail-safe code on NVMM, but few workloads have been written or characterized using them. In this work, we describe how these instructions work and how they can be used to implement write-ahead-logging-based transactions. We implement several common data structures and kernels and evaluate the performance overhead incurred over traditional non-persistent implementations. In particular, we find that persistence instructions occur in clusters along with expensive fence operations, that they have long latency, and that they add significant execution time overhead: on average 20.3% over code with logging but without fence instructions to order persists. To alleviate this performance bottleneck, we propose speculating past long-latency persistency operations using checkpoint-based processing. Our speculative persistence architecture reduces the execution time overhead to only 3.6%.
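A file-system analogue of the write-ahead-logging pattern described here, with os.fsync standing in for the clwb/pcommit-style persist barrier. On real NVMM these ordering points are cache-line write-backs plus fences rather than file I/O, but the structure - and where the overhead concentrates - is the same.

```python
import json
import os

LOG = "wal.log"

def persist_barrier(fh):
    """Analogue of clwb + fence/pcommit: force the write to durable media
    before the program proceeds."""
    fh.flush()
    os.fsync(fh.fileno())

def transactional_update(data, key, value):
    # 1. Persist an undo record BEFORE touching the data structure.
    with open(LOG, "a") as log:
        log.write(json.dumps({"key": key, "old": data.get(key)}) + "\n")
        persist_barrier(log)  # expensive ordering point: the clustered
                              # persist+fence cost measured in the paper
    # 2. Apply the in-place update (this would live in NVMM).
    data[key] = value
    # 3. Persist a commit mark so recovery knows the undo record is dead.
    with open(LOG, "a") as log:
        log.write(json.dumps({"commit": key}) + "\n")
        persist_barrier(log)

store = {"balance": 100}
transactional_update(store, "balance", 142)
print(store)
```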
User attribution based on inherent human dynamics and preferences is an area of research capable of elucidating and capturing human dynamics on the Internet. Prior work on user attribution concentrated on behavioral biometrics and 1-to-1 user identification, without considering individual preferences and inherent human temporal tendencies, which can provide a discriminatory baseline for online users as well as a higher-level classification framework for novel user attribution. To address these limitations, this study developed a temporal model comprising the human Polyphasia tendency, based on the Polychronic-Monochronic Tendency Scale measurement instrument, and the extraction of unique human-centric features from the server-side network traffic of 48 active users. Several machine-learning algorithms were applied to observe distinct patterns among the classes of Polyphasia tendency; a logistic model tree was observed to provide higher classification accuracy than the other algorithms for a 1-to-N user attribution process. The study further developed a high-level attribution model for the higher-level user attribution process. The results of this study are relevant to online profiling, forensic identification and profiling, e-learning profiling, and social network profiling.
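A hedged sketch of a 1-to-N attribution experiment with scikit-learn. A random forest stands in for the paper's logistic model tree (a Weka algorithm not available in scikit-learn), and the features and labels are synthetic placeholders rather than the study's traffic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-session human-centric features extracted from
# server-side traffic (e.g. inter-request timing, session overlap counts).
n_users, sessions_per_user, n_features = 48, 40, 12
X = rng.normal(size=(n_users * sessions_per_user, n_features))
y = np.repeat(np.arange(n_users), sessions_per_user)
X += y[:, None] * 0.05  # give each user a faint characteristic offset

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("1-to-N attribution accuracy: %.2f +/- %.2f"
      % (scores.mean(), scores.std()))
```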
Improving e-government services by using data more effectively is a major focus globally. It requires Public Administrations to be transparent and accountable and to provide trustworthy services that improve citizen confidence. However, despite all the technological advances in developing such services and analysing security and privacy concerns, the literature provides no evidence of frameworks and platforms that enable privacy analysis from multiple perspectives and take into account citizens' needs with regard to transparency and the usage of citizen information. This paper presents the VisiOn (Visual Privacy Management in User Centric Open Requirements) platform, an outcome of an H2020 European project. Our objective is to enable Public Administrations to analyse privacy and security from different perspectives, including requirements, threats, trust, and law compliance. Finally, our platform-supported approach introduces the concept of a Privacy Level Agreement (PLA), which allows Public Administrations to customise their privacy policies based on the privacy preferences of each citizen.
SQL injection attacks (SQLIA) pose a serious security threat to database-driven web applications. This kind of attack gives attackers easy access to the application's underlying database and to the potentially sensitive information the database contains: through specially crafted input, a hacker can access database content that would otherwise be out of reach. This is usually done by altering the SQL statements used within the web application. Because of the importance of web application security, researchers have studied SQLIA detection and prevention extensively and have developed various methods. In this research, after reviewing the existing work in this field, we present a new hybrid method to reduce the vulnerability of web applications. Our method is specifically designed to detect and prevent SQLIA, and consists of three phases: database design, implementation, and the common gateway interface (CGI). Details of our approach, along with its pros and cons, are discussed.
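For contrast with the string-concatenation pattern SQLIA exploits, a minimal sketch of the standard parameterized-query defence. sqlite3 stands in for the web application's database, and the table and payload are illustrative; the paper's three-phase hybrid method itself is not reproduced here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic tautology-based injection input

# VULNERABLE: attacker input is spliced into the SQL statement itself,
# turning the WHERE clause into a tautology that bypasses the check.
vulnerable = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
              % ("alice", payload))
print(conn.execute(vulnerable).fetchall())   # returns the row: auth bypassed

# SAFE: placeholders keep the input as data, never as SQL syntax.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print(conn.execute(safe, ("alice", payload)).fetchall())  # returns []
```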