Biblio
In this paper, we propose a novel method, based on keystroke dynamics, to distinguish between fake and truthful personal information written via a computer keyboard. Our method does not need any prior knowledge about the user who is providing the data. To our knowledge, this is the first work that associates human typing behavior with the production of lies regarding personal information. Via an experimental analysis involving 190 subjects, we show that this method is able to distinguish between truth and lies on specific types of autobiographical information, with an accuracy higher than 75%. Specifically, for information usually required in online registration forms (e.g., name, surname, and email), the typing behavior diverged significantly between truthful and untruthful answers. According to our results, keystroke analysis has great potential for detecting the veracity of self-declared information, and it could be applied to a large number of practical scenarios requiring users to input personal data remotely via keyboard.
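As a rough illustration only (not the authors' actual feature set or classifier), the Python sketch below shows how per-answer keystroke timing features such as key hold times and inter-key flight times could feed a generic binary truth/lie classifier; the function names and the use of scikit-learn's LogisticRegression are assumptions made for the sketch.

```python
# Illustrative sketch, not the paper's pipeline: timing features per typed answer
# plus a generic binary classifier (1 = truthful, 0 = deceptive).
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(events):
    """events: list of (key, press_time, release_time) tuples, times in seconds."""
    holds = np.array([release - press for _, press, release in events])
    flights = np.array([events[i + 1][1] - events[i][2]      # gap between consecutive keys
                        for i in range(len(events) - 1)])
    return np.array([holds.mean(), holds.std(),
                     flights.mean(), flights.std(), len(events)])

def train(answers, labels):
    """answers: list of keystroke event lists; labels: 1 truthful / 0 deceptive."""
    X = np.vstack([extract_features(a) for a in answers])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```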
Over the years, a lot of effort has been put into solving extensibility problems while retaining important software engineering properties such as modular type-safety and separate compilation. Most previous work focused on operations that traverse and process extensible Abstract Syntax Tree (AST) structures. However, there is almost no work on operations that build such extensible ASTs, including parsing. This paper investigates solutions to the problem of modular parsing. We focus on semantic modularity and not just syntactic modularity. That is, the solutions should not only allow complete parsers to be built out of modular parsing components, but also enable the parsing components to be modularly type-checked and separately compiled. We present a technique based on parser combinators that enables modular parsing. Interestingly, the modularity requirements for modular parsing rule out several existing parser combinator approaches, which rely on non-modular techniques. We show that Packrat parsing techniques provide solutions to such modularity problems and enable reasonable performance in a modular setting. Extensibility is achieved using multiple inheritance and Object Algebras. To evaluate the approach, we conduct a case study based on the "Types and Programming Languages" interpreters. The case study shows the approach's effectiveness at reusing parsing code from existing interpreters, and the total parsing code is 69% shorter than an existing code base using a non-modular parsing approach.
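To make the parser-combinator idea concrete, here is a toy Python sketch (hypothetical names, no Packrat memoization or Object Algebras) in which two grammar "modules" are written separately and then composed; the paper's actual solution is formulated in a statically typed object-oriented setting.

```python
# Toy combinators: a parser is a function (text, pos) -> (value, new_pos) or None.
def lit(s):
    def parse(text, pos):
        return (s, pos + len(s)) if text.startswith(s, pos) else None
    return parse

def alt(*parsers):                       # ordered choice
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

def seq(*parsers):                       # sequencing
    def parse(text, pos):
        values = []
        for p in parsers:
            result = p(text, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return (values, pos)
    return parse

# "Module" 1: boolean literals.  "Module" 2: reuses module 1 and adds conditionals.
bool_expr = alt(lit("true"), lit("false"))
if_expr = seq(lit("if "), bool_expr, lit(" then "), bool_expr, lit(" else "), bool_expr)
expr = alt(if_expr, bool_expr)

print(expr("if true then false else true", 0))
```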
Programming language definitions assign formal meaning to complete programs. Programmers, however, spend a substantial amount of time interacting with incomplete programs -- programs with holes, type inconsistencies and binding inconsistencies -- using tools like program editors and live programming environments (which interleave editing and evaluation). Semanticists have done comparatively little to formally characterize (1) the static and dynamic semantics of incomplete programs; (2) the actions available to programmers as they edit and inspect incomplete programs; and (3) the behavior of editor services that suggest likely edit actions to the programmer based on semantic information extracted from the incomplete program being edited, and from programs that the system has encountered in the past. As such, each tool designer has largely been left to develop their own ad hoc heuristics.
This paper serves as a vision statement for a research program that seeks to develop these "missing" semantic foundations. Our hope is that these contributions, which will take the form of a series of simple formal calculi equipped with a tractable metatheory, will guide the design of a variety of current and future interactive programming tools, much as various lambda calculi have guided modern language designs. Our own research will apply these principles in the design of Hazel, an experimental live lab notebook programming environment designed for data science tasks. We plan to co-design the Hazel language with the editor so that we can explore concepts such as edit-time semantic conflict resolution mechanisms and mechanisms that allow library providers to install library-specific editor services.
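One ingredient such calculi typically need is a static semantics that tolerates holes. The sketch below (illustrative Python with hypothetical names, not Hazel's actual calculus) assigns holes an "unknown" type that is consistent with every type, so an incomplete program still receives a meaningful type.

```python
# Minimal sketch: holes get type Unknown, and Unknown is consistent with any type.
from dataclasses import dataclass

@dataclass
class Num: pass                      # the only base type in this toy
@dataclass
class Unknown: pass                  # the type assigned to holes

@dataclass
class Lit:
    value: int
@dataclass
class Plus:
    left: object
    right: object
@dataclass
class Hole: pass                     # an empty hole in the program

def consistent(t1, t2):
    return isinstance(t1, Unknown) or isinstance(t2, Unknown) or type(t1) is type(t2)

def synth(e):
    if isinstance(e, Lit):
        return Num()
    if isinstance(e, Hole):
        return Unknown()
    if isinstance(e, Plus):
        if consistent(synth(e.left), Num()) and consistent(synth(e.right), Num()):
            return Num()
        raise TypeError("inconsistent operands")
    raise TypeError("unknown expression form")

print(synth(Plus(Lit(1), Hole())))   # the incomplete program still type-checks: Num()
```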
In parallel with the meteoric rise of mobile software, we are witnessing an alarming escalation in the number and sophistication of security threats targeting mobile platforms, particularly Android, the dominant platform. While existing research has made significant progress towards the detection and mitigation of Android security threats, gaps and challenges remain. This paper contributes a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area. We have carefully followed the systematic literature review process and analyzed the results of more than 300 research papers, resulting in the most comprehensive and elaborate investigation of the literature in this area of research. The systematic analysis of the research literature has revealed patterns, trends, and gaps in the existing literature, and underlined key challenges and opportunities that will shape the focus of future research efforts.
Software-defined networking (SDN) overcomes many limitations of traditional networking architectures because of its programmable and flexible nature. Security applications, for instance, can dynamically reprogram a network to respond to ongoing threats in real time. However, the same flexibility also creates risk, since it can be used against the network. Current SDN architectures potentially allow adversaries to disrupt one or more SDN system components and to hide their actions in doing so. That makes assurance and reasoning about past network events more difficult, if not impossible. In this paper, we argue that an SDN architecture must incorporate various notions of accountability for achieving systemwide cyber resiliency goals. We analyze accountability based on a conceptual framework, and we identify how that analysis fits in with the SDN architecture's entities and processes. We further consider a case study in which accountability is necessary for SDN network applications, and we discuss the limits of current approaches.
The accessibility of on-chip embedded infrastructure for test, reconfiguration, or debug poses a serious security problem. Access mechanisms based on IEEE Std 1149.1 (JTAG), and especially reconfigurable scan networks (RSNs), as allowed by IEEE Std 1500, IEEE Std 1149.1-2013, and IEEE Std 1687 (IJTAG), require special care in the design and development. This work studies the threats to trustworthy data transmission in RSNs posed by untrusted components within the RSN and external interfaces. We propose a novel scan pattern generation method that finds trustworthy access sequences to prevent sniffing and spoofing of transmitted data in the RSN. For insecure RSNs, for which such accesses do not exist, we present an automated transformation that improves the security and trustworthiness while preserving the accessibility to attached instruments. The area overhead is reduced based on results from trustworthy access pattern generation. As a result, sensitive data is not exposed to untrusted components in the RSN, and compromised data cannot be injected during trustworthy accesses.
The blockchain emerges as an innovative tool with the potential to positively impact the way we design a number of online applications today. In many ways, however, blockchain technology is still not mature enough to cater to industrial standards. Namely, existing Byzantine fault-tolerant permissioned blockchain deployments can only scale to a limited number of nodes. These systems typically require that all transactions (and their order of execution) are publicly available to all nodes in the system, which is at odds with common data sharing practices in the industry, and prevents a centralized regulator from overseeing the full blockchain system. In this paper, we propose a novel blockchain architecture devised specifically to meet industrial standards. Our proposal leverages the notion of satellite chains that can privately run different consensus protocols in parallel, thereby considerably boosting the scalability premises of the system. Our solution also accounts for a hands-off regulator that oversees the entire network, enforces specific policies by means of smart contracts, and so on. We implemented our solution and integrated it with Hyperledger Fabric v0.6.
Cloud computing has become a part of people's lives. However, there are many unresolved problems with the security of this technology. According to the assessment of international experts in the field of security, there are risks of cloud collusion under uncertain conditions. To mitigate this type of uncertainty, and to minimize the data redundancy of encryption together with the harm caused by cloud collusion, modified threshold Asmuth-Bloom and weighted Mignotte secret sharing schemes are used. We show that if the adversaries know some secret shares but do not know the secret key, they cannot recover the secret. If the attackers do not know the required number of secret shares but know the secret key, the probability that they obtain the secret depends on the size l of the machine word in bits and is less than 1/2^(l-1). We demonstrate that the proposed scheme ensures security under several types of attacks. We propose four approaches to selecting weights for the secret sharing schemes to optimize system behavior: three based on data access speed (pessimistic, balanced, and optimistic) and one based on the speed-per-price ratio. We use an approximate method to improve the accuracy of detection, localization, and error correction under uncertainty of the cloud parameters.
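For orientation, the sketch below shows the classic (unweighted, unmodified) k-of-n Asmuth-Bloom scheme on which the paper builds: shares are residues of a shifted secret modulo pairwise-coprime moduli, and any k shares recover it via the Chinese Remainder Theorem. The moduli and parameters are illustrative; the paper's modified threshold and weighted Mignotte variants, and the encryption layer, are not shown.

```python
# Classic Asmuth-Bloom (k-of-n) secret sharing sketch (Python 3.8+ for pow(x, -1, m)).
import random
from functools import reduce

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli; return x mod prod(moduli)."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def share(secret, k, moduli):
    """moduli[0] = m0 bounds the secret; moduli[1:] are the n share moduli."""
    m0, ms = moduli[0], moduli[1:]
    prod_k = reduce(lambda a, b: a * b, sorted(ms)[:k])
    r = random.randrange(0, (prod_k - secret) // m0)   # keep y below the product of any k moduli
    y = secret + r * m0
    return [(y % m, m) for m in ms]

def reconstruct(shares, m0):
    residues, ms = zip(*shares)
    return crt(list(residues), list(ms)) % m0

# 3-of-5 example; moduli satisfy the Asmuth-Bloom condition 101*103*107 > 11*109*113.
moduli = [11, 101, 103, 107, 109, 113]
shares = share(7, 3, moduli)
print(reconstruct(shares[:3], 11))    # any 3 shares recover the secret 7
```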
There are vast amounts of information in our world. Accessing the most accurate information quickly is becoming more difficult and complicated. A lot of relevant information gets ignored, which leads to much duplication of work and effort. Research therefore focuses on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Due to the massive number of search results, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria. Hence, it can provide the user with the results that best fit their needs. The services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with a level of trust, so that recommendations are ranked according to their level of trust. In our research, we produce a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments, and feedback from users are trusted. We customize the concept of LoT (Level of Trust) [1], since it can cover medical, shopping, and learning through social media. The proposed system TRS\_LoT provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 posts with more than 5000 comments from ``Amazon'' was selected, and the experiment was conducted on this dataset based on ``post rating''.
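As an illustration of combining relevance with trust (the weighting below is an assumption made for the sketch, not the actual TRS\_LoT formula), recommendations can be re-ranked so that a trusted post can outrank a more relevant but untrusted one:

```python
# Illustrative trust-aware ranking; the linear weighting is an assumption, not TRS_LoT.
def rank(posts, alpha=0.7):
    """posts: dicts with 'id', 'relevance' in [0, 1], and 'trust' in [0, 1]."""
    scored = [(alpha * p["relevance"] + (1 - alpha) * p["trust"], p["id"])
              for p in posts]
    return [post_id for _, post_id in sorted(scored, reverse=True)]

posts = [{"id": "p1", "relevance": 0.9, "trust": 0.2},
         {"id": "p2", "relevance": 0.7, "trust": 0.9}]
print(rank(posts))   # ['p2', 'p1']: the trusted post outranks the more relevant one
```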
Named Data Networking (NDN) is a content-oriented future Internet architecture, which well suits the increasingly mobile and information-intensive applications that dominate today's Internet. NDN relies on in-network caching to facilitate content delivery. This makes it challenging to enforce access control, since the content may be cached in routers where the content producer no longer has control over it. Due to its salient advantages in content delivery, network coding has been introduced into NDN to improve content delivery effectiveness. In this paper, we design ACNC, the first Access Control solution specifically for Network Coding-based NDN. By combining a novel linear AONT (All Or Nothing Transform) and encryption, we can ensure that only a legitimate user who possesses the authorization key can successfully recover the encoding matrix for network coding, and hence recover the content being transmitted. In addition, our design has two salient merits: 1) the linear AONT well suits the linear nature of network coding; 2) only one vector of the encoding matrix needs to be encrypted/decrypted, which incurs only a small computational overhead. Security analysis and experimental evaluation in ndnSIM show that our design can successfully enforce access control on network coding-based NDN with acceptable overhead.
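The sketch below is only a toy analogue of the "encrypt one vector" idea, not the paper's linear AONT construction: every row of the encoding matrix is masked with one secret random vector, and only that vector is protected with a keyed keystream, so recovering the matrix requires the key while the per-content cost stays at a single vector.

```python
# Toy analogue (NOT the paper's linear AONT): mask all rows with one secret vector r
# and 'encrypt' only r; without r, the masked coefficients reveal nothing useful.
import hashlib, os
import numpy as np

P = 257   # small prime field for illustration

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(out[:length], dtype=np.uint8).astype(int)

def protect(G, key):
    r = np.random.randint(0, P, G.shape[1])          # one secret masking vector
    masked = (G + r) % P                             # every row depends on r
    enc_r = (r + keystream(key, len(r))) % P         # only r is 'encrypted'
    return masked, enc_r

def recover(masked, enc_r, key):
    r = (enc_r - keystream(key, len(enc_r))) % P
    return (masked - r) % P

key = os.urandom(16)
G = np.random.randint(0, P, (4, 8))                  # toy 4x8 encoding matrix
masked, enc_r = protect(G, key)
assert (recover(masked, enc_r, key) == G).all()
```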
Cryptanalysis (the study of methods to read encrypted information without knowledge of the encryption key) has traditionally been separated into mathematical analysis of weaknesses in cryptographic algorithms, on the one hand, and side-channel attacks, which aim to exploit weaknesses in the implementation of encryption and decryption algorithms, on the other. Mathematical analysis generally makes assumptions about the algorithm with the aim of reconstructing the key relating plain text to cipher text through brute-force methods. Complexity issues tend to dominate the systematic search for keys. To date, there has been very little research on a third cryptanalysis method: learning the key through convergence based on associations between plain text and cipher text. Recent advances in deep learning using multi-layered artificial neural networks (ANNs) provide an opportunity to reassess the role of deep learning architectures in next-generation cryptanalysis methods based on neurocryptography (NC). In this paper, we explore the capability of deep ANNs to decrypt encrypted messages with minimum knowledge of the algorithm. From the experimental results, it can be concluded that deep neural networks (DNNs) can encrypt and decrypt to levels of accuracy that are not 100% because of the stochastic aspects of ANNs. This aspect may, however, be useful if communication is under cryptanalysis attack, since the attacker will not know for certain that the key K used for encryption and decryption has been found. Also, uncertainty concerning the architecture used for encryption and decryption adds another layer of uncertainty that has no counterpart in traditional cryptanalysis.
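As a minimal illustration of "learning the key through associations between plain text and cipher text" (a toy XOR cipher and a small scikit-learn MLP, not the paper's DNN architecture or cipher), the following sketch trains a network on ciphertext/plaintext pairs and measures per-bit decryption accuracy on held-out blocks:

```python
# Toy sketch: learn to invert a fixed-key XOR cipher from (ciphertext, plaintext) pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 16)                       # fixed 16-bit key
plain = rng.integers(0, 2, (2000, 16))             # random 16-bit plaintext blocks
cipher = plain ^ key                               # toy 'encryption'

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(cipher[:1500], plain[:1500])             # multi-label: one output per bit

pred = model.predict(cipher[1500:])
print("held-out bit accuracy:", (pred == plain[1500:]).mean())   # high, but not guaranteed 100%
```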
This article derives trade-offs between three basic costs of a parallel algorithm: synchronization, data movement, and computational cost. These trade-offs are lower bounds on the execution time of the algorithm that are independent of the number of processors but dependent on the problem size. Therefore, they provide lower bounds on the execution time of any parallel schedule of an algorithm computed by a system composed of any number of homogeneous processors, each with associated computational, communication, and synchronization costs. We employ a theoretical model that measures the amount of work and data movement as a maximum over that incurred along any execution path during the parallel computation. By considering this metric rather than the total communication volume over the whole machine, we obtain new insights into the characteristics of parallel schedules for algorithms with nontrivial dependency structures. We also present reductions from BSP and LogGP algorithms to our execution model, extending our lower bounds to these two models of parallel computation. We first develop our results for general dependency graphs and hypergraphs based on their expansion properties, and then we apply the theorem to a number of specific algorithms in numerical linear algebra, namely triangular substitution, Cholesky factorization, and stencil computations. We represent some of these algorithms as families of dependency graphs. We derive their communication lower bounds by studying the communication requirements of the hypergraph structures shared by these dependency graphs. In addition to these lower bounds, we introduce a new communication-efficient parallelization for stencil computation algorithms, which is motivated by results of our lower bound analysis and the properties of previously existing parallelizations of the algorithms.
Social media plays an integral part in individuals' everyday lives as well as in the operations of companies. Social media brings numerous benefits to people's lives, such as keeping in touch with close ones, especially relatives who are overseas, making new friends, buying products, sharing information, and much more. Unfortunately, several threats accompany the countless advantages of social media. The rapid growth of online social networking sites provides more scope for criminals and cyber-criminals to carry out their illegal activities, and hackers have found different ways of exploiting these platforms for their malicious gains. This research covers some of the common threats on social media, such as spam, malware, Trojan horses, cross-site scripting, industrial espionage, cyber-bullying, cyber-stalking, and social engineering attacks, and elaborates in particular on phishing, malware, and click-jacking attacks. The research is motivated by the lack of work on forensic investigation of Facebook: there is no dedicated forensic investigation methodology or set of forensic tools that can be followed for Facebook, and although several tools are available to extract digital data, they have not been properly tested on Facebook. A forensic investigation tool is used to extract evidence to determine what happened, when, where, and who is responsible; this information is required to ensure that there is sufficient evidence to take legal action against criminals.
Industrial Control Systems (ICS) are found in critical infrastructure such as power generation and water treatment. When security requirements are incorporated into an ICS, one needs to test the additional code and devices added to improve the prevention and detection of cyber attacks. Conducting such tests in legacy systems is a challenge due to the high availability requirement. An approach using Timed Automata (TA) is proposed to overcome this challenge. This approach enables assessment of the effectiveness of an attack detection method based on process invariants. The approach has been demonstrated in a case study on one stage of a six-stage operational water treatment plant. The model constructed captures the interactions among components in the selected stage. In addition, a set of attacks, attack detection mechanisms, and security specifications were also modeled using TA. These TA models were conjoined into a network and implemented in UPPAAL. The models so implemented were found effective in detecting the attacks considered. The study suggests the use of TA as an effective tool to model an ICS and study its attack detection mechanisms, as a complement to doing so in a real plant, whether operational or under design.
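To convey the flavor of invariant-based detection that the TA models formalize (plain Python with hypothetical process parameters; the actual UPPAAL models are not reproduced here), the sketch below predicts a tank level from commanded pump and valve states and raises an alarm whenever the measured level violates the balance invariant:

```python
# Toy process-invariant detector: flag sensor readings inconsistent with actuator state.
def detect(readings, pump_on, valve_open, inflow=2.0, outflow=1.5, tol=0.5):
    """readings: measured tank levels per step; returns indices where the invariant fails."""
    alarms, expected = [], readings[0]
    for t in range(1, len(readings)):
        expected += (inflow if pump_on[t] else 0.0) - (outflow if valve_open[t] else 0.0)
        if abs(readings[t] - expected) > tol:     # invariant violated: possible attack
            alarms.append(t)
            expected = readings[t]                # resynchronize after the alarm
    return alarms

# A spoofed level of 30 at step 3 breaks the level-balance invariant: prints [3].
print(detect([10, 12, 14, 30, 32], [0, 1, 1, 1, 1], [0, 0, 0, 0, 0]))
```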
This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing using a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for a protection system, or to demonstrate any (desirable) reliability level for continuous-operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
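The scale of the time problem can be made concrete with the standard reliability-demonstration bound (a textbook result, not a contribution of the paper): to claim a probability of failure on demand of at most p with confidence 1 - alpha from failure-free testing, the number of statistically representative demands n must satisfy

```latex
% Failure-free demands needed to claim pfd <= p with confidence 1 - \alpha:
\[
  (1 - p)^{n} \le \alpha
  \quad\Longrightarrow\quad
  n \;\ge\; \frac{\ln \alpha}{\ln(1 - p)} \;\approx\; \frac{-\ln \alpha}{p}.
\]
% Example: p = 10^{-4} and \alpha = 0.01 give n of roughly 4.6 \times 10^{4}
% failure-free demands, which is why the required test time dwarfs the period T.
```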
The discussion of threats and vulnerabilities in Industrial Control Systems has gained popularity during the last decade due to the increased interest in and growing concern about securing these systems. In order to provide an overview of the complete landscape of these threats and vulnerabilities, this contribution provides a tiered security analysis of the assets that constitute Industrial Control Systems. The identification of assets is obtained from a generalization of the system's architecture. Additionally, the security analysis is complemented by a discussion of security countermeasures and solutions that can be used to counteract the vulnerabilities and increase the security of control systems.
Social media has become an important platform for people to express opinions, share information, and communicate with others. Detecting and tracking topics from social media can help people grasp essential information and facilitate many security-related applications. As social media texts are usually short, traditional topic evolution models built on LDA or HDP often suffer from the data sparsity problem. Recently proposed topic evolution models are more suitable for short texts, but they require the number of topics to be specified manually, and this number remains fixed across time periods. To address these issues, in this paper we propose a nonparametric topic evolution model for social media short texts. We first propose the recurrent semantic dependent Chinese restaurant process (rsdCRP), a nonparametric process incorporating word embeddings to capture semantic similarity information. We then combine rsdCRP with word co-occurrence modeling and build our short-text oriented topic evolution model, sdTEM. We carry out experimental studies on a Twitter dataset. The results demonstrate the effectiveness of our method at monitoring social media topic evolution compared to the baseline methods.
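For intuition, the toy sketch below implements a plain CRP-style seating rule where the chance of joining an existing topic is weighted by embedding similarity; the recurrent, time-dependent component of rsdCRP and the word co-occurrence modeling of sdTEM are not modeled, and all names are illustrative.

```python
# Toy semantic CRP seating: popularity times embedding similarity, or open a new topic.
import numpy as np

def seat(doc_vecs, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    tables, sums = [], []                       # doc indices and summed embeddings per topic
    for i, v in enumerate(doc_vecs):
        v = v / np.linalg.norm(v)
        weights = [len(t) * max(0.0, float(v @ (s / np.linalg.norm(s))))
                   for t, s in zip(tables, sums)] + [alpha]      # last entry: new topic
        probs = np.array(weights) / sum(weights)
        choice = rng.choice(len(probs), p=probs)
        if choice == len(tables):
            tables.append([i]); sums.append(v.copy())
        else:
            tables[choice].append(i); sums[choice] += v
    return tables

# Two loose clusters of 50-dimensional 'document embeddings'.
docs = np.vstack([np.random.default_rng(1).normal(size=(10, 50)),
                  np.random.default_rng(2).normal(size=(10, 50)) + 3.0])
print(seat(docs))
```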
Unmanned Aerial Vehicles (UAVs) are autonomous aircraft that, when equipped with wireless communication interfaces, can share data among themselves when in communication range. Compared to single UAVs, using multiple UAVs as a collaborative swarm is considerably more effective for target tracking, reconnaissance, and surveillance missions because of their capacity to tackle complex problems synergistically. Success rates in target detection and tracking depend on map coverage performance, which in turn relies on network connectivity between UAVs to propagate surveillance results and avoid revisiting already observed areas. In this paper, we consider the problem of optimizing three objectives for a swarm of UAVs: (a) target detection and tracking, (b) map coverage, and (c) network connectivity. Our approach, the Dual-Pheromone Clustering Hybrid Approach (DPCHA), incorporates multi-hop clustering and a dual-pheromone ant-colony model to optimize these three objectives. Clustering maintains stable overlay networks, while attractive and repulsive pheromones mark areas of detected targets and visited areas, respectively. Additionally, DPCHA introduces a disappearing-target model for dealing with temporarily invisible targets. Extensive simulations show that DPCHA produces significant improvements in coverage fairness, cluster stability, and connection volatility. We compared our approach with a pure dual-pheromone approach and a no-base model, which removes the base station from the model. Results show an approximately 50% improvement in map coverage compared to the pure dual-pheromone approach.
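The core dual-pheromone mechanics can be illustrated with a small grid sketch (illustrative parameters; DPCHA's clustering, connectivity handling, and disappearing-target model are not included): attractive pheromone is deposited around detected targets, repulsive pheromone on visited cells, and the UAV moves to the neighboring cell with the best attraction-minus-repulsion score.

```python
# Toy dual-pheromone grid: evaporate, deposit, then greedily pick the next cell.
import numpy as np

SIZE, EVAP = 20, 0.95
xs, ys = np.meshgrid(np.arange(SIZE), np.arange(SIZE), indexing="ij")
attract = np.zeros((SIZE, SIZE))    # marks areas where targets were detected
repulse = np.zeros((SIZE, SIZE))    # marks areas already visited

def step(pos, targets_seen):
    global attract, repulse
    attract *= EVAP
    repulse *= EVAP
    for tx, ty in targets_seen:                           # diffuse deposit around targets
        attract += np.exp(-((xs - tx) ** 2 + (ys - ty) ** 2) / 8.0)
    repulse[pos] += 1.0                                   # mark the current cell as visited
    x, y = pos
    neighbors = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx or dy) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]
    return max(neighbors, key=lambda c: attract[c] - repulse[c])

pos = (10, 10)
for _ in range(5):
    pos = step(pos, targets_seen=[(12, 12)])
print(pos)   # the UAV ends up near the marked target while avoiding revisited cells
```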
In this paper, we provide insights towards achieving more robust automatic facial expression recognition in smart environments, based on our benchmark with three labeled facial expression databases. These databases were selected to test desktop, 3D, and smart environment application scenarios. This work is meant to provide a neutral comparison and guidelines for developers and researchers interested in integrating facial emotion recognition technologies into their applications, and in understanding its limitations as well as adaptation and enhancement strategies. We also introduce and compare three different metrics for finding the primary expression in a time window of a displayed emotion. In addition, we outline facial emotion recognition limitations and enhancements for smart environments and non-frontal setups. By providing our comparison and enhancements, we hope to build a bridge from affective computing research and solution providers to application developers who would like to enhance new applications with emotion-based user modeling.
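One plausible way to compute the primary expression in a time window (the paper compares three metrics; the one below, highest mean confidence over the window's frames, is only an illustration with hypothetical names) is:

```python
# Pick the expression label with the highest mean confidence across the window.
from collections import defaultdict

def primary_expression(frames):
    """frames: list of dicts mapping expression label -> confidence for one frame."""
    totals = defaultdict(float)
    for scores in frames:
        for label, confidence in scores.items():
            totals[label] += confidence
    return max(totals, key=totals.get)

window = [{"happy": 0.6, "neutral": 0.3, "surprise": 0.1},
          {"happy": 0.4, "neutral": 0.5, "surprise": 0.1},
          {"happy": 0.7, "neutral": 0.2, "surprise": 0.1}]
print(primary_expression(window))   # 'happy' has the highest mean confidence
```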