Biblio
Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems, but are defined differently in each case. Generally, humans have the ability to interpret the intent behind policies and to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflict, or omission. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, of more flexible policies for the spectrum of human-autonomous system collaborations. In coalition environments, this spectrum extends across boundaries of authority in pursuit of a common coalition goal and covers collaborations between humans and autonomous systems alike. In the social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking expected rewards, costs, satisfaction, and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive in this context because it provides a theory with "measurable" components that can be readily integrated into machine reasoning processes.
This paper introduces a sensor-based smart system for internal and external security. The system is useful for people living in houses and apartments, for high officials, and for banks and offices. It is developed in two phases: one for internal security, such as homes, and another for external security, such as open areas and streets. The system consists of a mobile application, capacitive sensing, and smart routing, valuable features that help ensure the safety of life and property. This wireless-sensor-based security system is an effective alternative to CCTV cameras and other available security systems. The efficiency of the system was established through practical studies and prototyping. The results describe the system's feasibility, positive impact, and reliability, and the paper explains how further research can build on this system in the future.
Compressed sensing (CS) can represent a sparse signal with a small number of measurements compared to Nyquist-rate samples. Considering the high complexity of CS reconstruction algorithms, compressive detection has recently been proposed; it performs detection directly in the compressive domain, without reconstruction. Different from existing work, which generally considers measurements corrupted by dense noise only, this paper studies the compressive detection problem when the measurements are corrupted by both dense noise and sparse errors. Sparse errors exist in many practical systems, such as those affected by impulse noise or narrowband interference. We derive the theoretical performance of compressive detection when the sparse error is either deterministic or random. The theoretical results are further verified by simulations.
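For concreteness, a minimal sketch of the measurement model implied by this setting, with notation assumed rather than taken from the paper:

```latex
% y: measurements, \Phi: M x N measurement matrix (M << N),
% x: sparse signal, e: sparse error, n: dense noise
\[ y = \Phi x + e + n, \qquad \Phi \in \mathbb{R}^{M \times N},\ M \ll N \]
\[ \mathcal{H}_0 : x = 0 \quad \text{vs.} \quad \mathcal{H}_1 : x \neq 0 \]
```

The detector decides between the hypotheses directly from y, without reconstructing x; the sparse error e models impulse noise or narrowband interference.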
Factoring is an important financial instrument for SMEs to solve liquidity problems: the invoice is cashed early to avoid late buyer payments. Unfortunately, this business model is risky, as it relies on human interaction, and the actors involved (factors in particular) suffer from information asymmetry. One of the risks involved is 'double financing': the event in which an SME extracts funds from multiple factors. To reduce this asymmetry and increase the scalability of this important instrument, we propose DecReg, a framework based on blockchain technology. We provide the protocols designed for this framework and present a performance analysis. The framework will be deployed in practice in the Netherlands as of February 2017.
Relationships such as friendship, used to limit access to resources, have been part of social network applications since their beginnings. Describing access control policies in terms of relationships is not particular to social networks; it arises naturally in many situations. Hence, we have recently seen several proposals formalizing different Relationship-based Access Control (ReBAC) models. In this paper, we introduce a class of Datalog programs suitable for modeling ReBAC and argue that this class of programs, which we call ReBAC Datalog policies, provides a very general framework to specify and implement ReBAC policies. To support our claim, we first formalize the merging of two recent proposals for modeling ReBAC, one based on hybrid logic and the other based on path regular expressions. We present extensions to handle negative authorizations and temporal policies. We describe mechanisms for policy analysis, and then discuss the feasibility of using Datalog-based systems as implementations.
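As a rough illustration of relationship-based policies (not the paper's Datalog formalism), the sketch below evaluates a hypothetical "friends-of-friends may view a photo" rule over a relationship graph:

```python
from collections import deque

# Hypothetical relationship graph: user -> set of friends.
FRIENDS = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
}

def within_hops(src: str, dst: str, max_hops: int) -> bool:
    """Breadth-first search: is dst reachable from src in <= max_hops friend edges?"""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        user, dist = frontier.popleft()
        if user == dst:
            return True
        if dist < max_hops:
            for friend in FRIENDS.get(user, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, dist + 1))
    return False

# ReBAC-style rule: the owner's friends-of-friends may view the photo.
def may_view_photo(requester: str, owner: str) -> bool:
    return within_hops(owner, requester, max_hops=2)

print(may_view_photo("carol", "alice"))  # True: carol is a friend of a friend
```

In the paper's framework such a rule would instead be expressed as a Datalog clause over relationship predicates; the bounded hop count here plays the role of a path regular expression.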
Stealthy attackers often disable or tamper with system monitors to hide their tracks and evade detection. In this poster, we present a data-driven technique for detecting such monitor compromise using evidential reasoning. Leveraging the fact that it is difficult for an attacker to hide from multiple, redundant monitors, we identify potential monitor compromise by combining alerts from different sets of monitors using Dempster-Shafer theory and comparing the results to find outliers. We describe our ongoing work in this area.
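A minimal sketch of Dempster's rule of combination over the frame {compromised, clean}; the monitor sets and mass values below are hypothetical:

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

C, OK = frozenset({"compromised"}), frozenset({"clean"})
BOTH = C | OK  # ignorance: mass on the whole frame

# Hypothetical evidence from two redundant monitor sets.
m_host = {C: 0.6, OK: 0.1, BOTH: 0.3}
m_net  = {C: 0.1, OK: 0.7, BOTH: 0.2}
print(combine(m_host, m_net))
```

Strong disagreement between monitor sets (a large conflict mass, as in this example) is exactly the kind of outlier that can indicate one set of monitors has been tampered with.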
In this paper, we analyze the security of cyber-physical systems using the ADversary VIew Security Evaluation (ADVISE) meta modeling approach, taking into consideration the effects of physical attacks. To build our model of the system, we construct an ontology that describes the system components and the relationships among them. The ontology also defines attack steps that represent cyber and physical actions that affect the system entities. We apply the ADVISE meta modeling approach, which admits as input our defined ontology, to a railway system use case to obtain insights regarding the system's security. The ADVISE Meta tool takes in a system model of a railway station and generates an attack execution graph that shows the actions that adversaries may take to reach their goal. We consider several adversary profiles, ranging from outsiders to insider staff members, and compare their attack paths in terms of targeted assets, time to achieve the goal, and probability of detection. The generated results show that even adversaries with access to noncritical assets can affect system service by intelligently crafting their attacks to trigger a physical sequence of effects. We also identify the physical devices and user actions that require more in-depth monitoring to reinforce the system's security.
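As a rough illustration only (ADVISE Meta derives such graphs from the ontology; the toy graph, step names, and times below are invented), comparing adversary profiles reduces to path search over an attack execution graph:

```python
import heapq

# Hypothetical attack execution graph: step -> [(next_step, hours_to_execute)].
GRAPH = {
    "outsider": [("lobby_access", 1.0)],
    "insider": [("signal_cabinet", 0.5)],
    "lobby_access": [("signal_cabinet", 4.0)],
    "signal_cabinet": [("disrupt_service", 2.0)],
    "disrupt_service": [],
}

def fastest_attack(start: str, goal: str):
    """Dijkstra over attack steps: minimum time to the goal for a given profile."""
    queue, best = [(0.0, start, [start])], {}
    while queue:
        t, step, path = heapq.heappop(queue)
        if step == goal:
            return t, path
        if best.get(step, float("inf")) <= t:
            continue
        best[step] = t
        for nxt, cost in GRAPH[step]:
            heapq.heappush(queue, (t + cost, nxt, path + [nxt]))
    return float("inf"), []

for profile in ("outsider", "insider"):
    print(profile, fastest_attack(profile, "disrupt_service"))
```

The full approach additionally weighs detection probability and asset value along each path, which simple shortest-path search does not capture.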
Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences for organizations, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy but time-consuming deep learning (DL) models. The main idea is to place software receiving borderline classifications from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received borderline classifications from those ML methods. The uncertain environment imposes obstacles on software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a threshold of 10%, intrusive and non-intrusive strategies caused approximately 65% of malware to fail to accomplish their tasks, while approximately 30% of the analyzed benign software experienced various levels of disruption (crashing or being hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
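As a toy illustration of the uncertain environment (Chameleon itself intercepts real system calls inside the OS; the user-space wrapper below is a hypothetical stand-in):

```python
import random
import time

PERTURB_THRESHOLD = 0.10  # probability of perturbing a selected call

def uncertain(call, *args, intrusive: bool = False, **kwargs):
    """Probabilistically perturb a selected operation while deep analysis runs."""
    if random.random() < PERTURB_THRESHOLD:
        if intrusive:
            raise OSError("injected failure")  # intrusive strategy: fail the call
        time.sleep(0.05)                       # non-intrusive strategy: delay it
    return call(*args, **kwargs)

# Example: rate-limit a borderline process's file writes during DL analysis.
with open("/tmp/demo.txt", "w") as f:
    uncertain(f.write, "hello\n")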
As a vital component of a variety of cyber attacks, malicious domains have made their detection a hot topic in cyber security. Several recent techniques identify malicious domains through analysis of DNS data, because DNS data contains much global information that cannot be manipulated by attackers. Attackers constantly recycle resources: they frequently change domain-IP resolutions and create new domains to avoid detection. As a result, multiple malicious domains are hosted by the same IPs, and multiple IPs simultaneously host the same malicious domains, creating intrinsic associations among them. Hence, labeled domains, traced back through the query history of all domains, can be used to verify and uncover these associations. Graphs are the natural representation for this relationship, and many high-performance algorithms have been developed for graphs. The problem can thus be cast as the graph mining task of inferring reputation scores for graph nodes using improvements of the belief propagation algorithm; the higher the reputation score a node reveals, the higher its inferred probability of being malicious. For demonstration, this paper proposes a malicious domain detection technique and evaluates it on a real-world dataset. The dataset is collected from DNS data servers and is used to build a DNS graph. The proposed technique achieves high performance, with an accuracy rate over 98.3% and precision and recall rates of 99.1% and 98.6%, respectively. In particular, with a small set of labeled domains (legitimate and malicious), the technique can discover a large set of potentially malicious domains. The results indicate that the method is strongly effective in detecting malicious domains.
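A much-simplified reputation-propagation iteration on a domain-IP graph, in the spirit of (but far simpler than) the improved belief propagation the paper uses; the graph and seed labels are hypothetical:

```python
# Bipartite domain-IP graph: edges are resolutions observed in DNS data.
EDGES = [("evil-a.com", "1.2.3.4"), ("evil-b.com", "1.2.3.4"),
         ("evil-b.com", "5.6.7.8"), ("shop.example", "9.9.9.9")]

neighbors = {}
for d, ip in EDGES:
    neighbors.setdefault(d, set()).add(ip)
    neighbors.setdefault(ip, set()).add(d)

# Seed scores from labeled domains: 1.0 = known malicious, 0.0 = known benign.
score = {n: 0.5 for n in neighbors}
seeds = {"evil-a.com": 1.0, "shop.example": 0.0}
score.update(seeds)

for _ in range(10):  # propagate scores along resolution edges
    for node in neighbors:
        if node in seeds:
            continue  # labeled nodes keep their score
        score[node] = sum(score[m] for m in neighbors[node]) / len(neighbors[node])

print(sorted(score.items(), key=lambda kv: -kv[1]))  # high score ~ likely malicious
```

Even this toy version shows the leverage the paper exploits: one labeled malicious domain raises the scores of every domain sharing its hosting IPs.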
With the growing popularity of Android, its attack surface has also increased. The prevalence of third-party Android marketplaces gives attackers an opportunity to plant their malicious apps in the mobile ecosystem. To evade signature-based detection, attackers often transform their malware, for instance by introducing code-level changes. In this paper, we propose a lightweight static Permission Flow Graph (PFG) based approach to detect malware even after it has been transformed (obfuscated). A number of techniques based on behavioral analysis have been proposed in the past; however, our interest lies in leveraging the permission framework alone to detect malware variants and transformations, without considering the behavioral aspects of the malware. Our approach constructs a Permission Flow Graph (PFG) for an Android app. Transformations performed at the code level often change the control flow; however, most of the time the permission flow remains invariant. As a consequence, the PFGs of transformed and non-transformed malware remain structurally similar, as we show in this paper using a state-of-the-art graph similarity algorithm. Furthermore, we propose graph-based similarity metrics at both the edge level and the vertex level in order to bring out the structural similarity of the two PFGs being compared. We validate our proposed methodology through machine learning algorithms. The results show that our approach successfully groups Android malware and its variants (transformations) into the same cluster. Further, we demonstrate that our approach detects transformed malware with an accuracy of 98.26%, ensuring that malicious apps can be detected even after transformation.
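A hedged sketch of edge- and vertex-level similarity between two permission flow graphs, using Jaccard overlap as a stand-in (the paper's exact metrics may differ); the graphs themselves are hypothetical:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical PFGs: vertices are permissions, edges are permission flows.
pfg_original = {("INTERNET", "READ_CONTACTS"), ("READ_CONTACTS", "WRITE_EXTERNAL")}
pfg_transformed = {("INTERNET", "READ_CONTACTS"), ("READ_CONTACTS", "WRITE_EXTERNAL"),
                   ("INTERNET", "ACCESS_FINE_LOCATION")}

def vertices(pfg):  # vertex set induced by the edges
    return {v for edge in pfg for v in edge}

edge_sim = jaccard(pfg_original, pfg_transformed)
vertex_sim = jaccard(vertices(pfg_original), vertices(pfg_transformed))
print(edge_sim, vertex_sim)  # code-level obfuscation barely moves these scores
```

The key property the paper relies on is that obfuscation rewrites control flow but rarely adds or removes permission flows, so both similarity scores stay high across variants.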
With the advancement of unmanned aerial vehicles (UAVs), 3D wireless mesh networks will play a crucial role in next-generation mission-critical wireless networks. Along with providing coverage over difficult terrain, they provide better spectral utilization through 3D spatial reuse. However, being wireless networks, 3D meshes are vulnerable to jamming and disruptive attacks. A jammer can disrupt communication, as well as control of the network, by intelligently causing interference to a set of nodes. This paper presents a distributed mechanism for avoiding jamming attacks by means of 3D spatial filtering, where adaptive beam nulling is used to keep the jammer in the null region and thereby bypass jamming. A Kalman-filter-based tracking mechanism is used to estimate the most likely trajectory of the jammer from noisy observations of the jammer's position. A beam-null border is determined by calculating the confidence region of the jammer's current and next position estimates. An optimization goal is presented that calculates the optimal beam null, minimizing the number of deactivated links while maximizing the confidence that the jammer stays inside the null. The survivability of a 3D mesh network with a mobile jammer is studied through simulation, which validates a 96.65% reduction in the number of jammed nodes.
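A minimal constant-velocity Kalman filter for tracking the jammer from noisy position observations (2-D for brevity, where the paper works in 3-D; all matrices here are textbook assumptions, not the paper's parameters):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise covariance (assumed)
R = 0.50 * np.eye(2)           # observation noise covariance (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4)                  # initial covariance

def kalman_step(z):
    """One predict/update cycle given a noisy position observation z."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update
    P = (np.eye(4) - K @ H) @ P
    return x[:2], P[:2, :2]                       # position estimate + covariance

for z in [np.array([1.0, 0.9]), np.array([2.1, 2.0]), np.array([3.0, 3.2])]:
    pos, cov = kalman_step(z)
print(pos, np.trace(cov))
```

The predicted position together with its covariance defines the confidence region that the beam-null border must cover in the paper's optimization.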
Online controlled experiments (e.g., A/B tests) are now regularly used to guide product development and accelerate innovation in software. Product ideas are evaluated as scientific hypotheses, and tested in web sites, mobile applications, desktop applications, services, and operating systems. One of the key challenges for organizations that run controlled experiments is to come up with the right set of metrics [1] [2] [3]. Having good metrics, however, is not enough. In our experience of running thousands of experiments with many teams across Microsoft, we observed again and again how incorrect interpretations of metric movements may lead to wrong conclusions about the experiment's outcome, which if deployed could hurt the business by millions of dollars. Inspired by Steven Goodman's twelve p-value misconceptions [4], in this paper, we share twelve common metric interpretation pitfalls which we observed repeatedly in our experiments. We illustrate each pitfall with a puzzling example from a real experiment, and describe processes, metric design principles, and guidelines that can be used to detect and avoid the pitfall. With this paper, we aim to increase the experimenters' awareness of metric interpretation issues, leading to improved quality and trustworthiness of experiment results and better data-driven decisions.
Internet of Things (IoT) devices offer new sources of contextual information, which can be leveraged by applications to make smart decisions. However, due to the decentralized and heterogeneous nature of such devices, each having only a partial view of its surroundings, there is an inherent risk of uncertain, unreliable, and inconsistent observations. This is a serious concern for applications making security-related decisions, such as context-aware authentication. We propose and evaluate a middleware for IoT that provides trustworthy context for a collaborative authentication use case. It abstracts a dynamic and distributed fusion scheme that extends the Chair-Varshney (CV) optimal decision fusion rule so that it can be used in a highly dynamic IoT environment. We compare performance and cost trade-offs against regular CV. Experimental evaluation demonstrates that our solution outperforms CV by 10% in highly dynamic IoT environments, with the ability to detect and mitigate unreliable sensors.
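For reference, a small sketch of the classical Chair-Varshney fusion rule that the middleware extends; the sensor statistics below are hypothetical:

```python
import math

def chair_varshney(decisions, p_d, p_f, prior_h1=0.5):
    """Optimally fuse binary local decisions u_i given each sensor's
    detection probability p_d[i] and false-alarm probability p_f[i]."""
    llr = math.log(prior_h1 / (1.0 - prior_h1))
    for u, pd, pf in zip(decisions, p_d, p_f):
        if u == 1:
            llr += math.log(pd / pf)
        else:
            llr += math.log((1.0 - pd) / (1.0 - pf))
    return 1 if llr > 0 else 0  # 1 = accept H1 (e.g., user is present)

# Hypothetical: three IoT sensors vote; the reliable ones dominate the decision.
print(chair_varshney([1, 1, 0], p_d=[0.9, 0.8, 0.6], p_f=[0.05, 0.1, 0.4]))
```

The classical rule assumes fixed, known per-sensor statistics; the middleware's contribution is keeping those estimates current as sensors join, leave, or degrade.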
Binary embedding is an effective approach for nearest neighbor (NN) search, as binary codes are storage efficient and fast to compute. It tries to convert real-valued signatures into binary codes while preserving the similarity of the original data. However, it greatly decreases the discriminability of the original signatures due to the large loss of information. In this paper, we propose a novel method, double-bit quantization and weighting (DBQW), to address this problem by mapping each dimension to a double-bit binary code and assigning different weights according to their spatial relationship. The proposed method is applicable to a wide variety of embedding techniques, such as SH, PCA-ITQ, and PCA-RR. Experimental comparisons on two datasets show that DBQW for NN search achieves remarkable improvements in query accuracy compared to the original binary embedding methods.
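A toy sketch of the double-bit idea; the thresholds, codes, and weights here are illustrative assumptions, not the paper's exact DBQW construction:

```python
import numpy as np

def double_bit_codes(X):
    """Quantize each dimension into 2 bits using per-dimension tercile thresholds."""
    lo, hi = np.quantile(X, [1 / 3, 2 / 3], axis=0)
    b1 = (X >= lo).astype(np.uint8)     # first bit: above lower threshold
    b2 = (X >= hi).astype(np.uint8)     # second bit: above upper threshold
    return np.stack([b1, b2], axis=-1)  # shape (n, d, 2)

def weighted_hamming(c1, c2, w):
    """Per-dimension weights w reflect how discriminative each dimension is."""
    return float(np.sum(w[:, None] * (c1 != c2)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # e.g., PCA-projected signatures
codes = double_bit_codes(X)
w = X.var(axis=0)              # assumed weighting: per-dimension variance
print(weighted_hamming(codes[0], codes[1], w))
```

Compared to plain one-bit thresholding, the extra bit per dimension preserves more of the original ordering, which is the information loss the paper targets.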
As one of the next-generation network architectures, Named Data Networking (NDN), which features location-independent addressing and content caching, is well suited for deployment in Vehicular Ad-hoc Networks (VANETs). However, a new attack pattern emerges when NDN and VANET are combined: the Interest Packet Popple Broadcast Diffusion Attack (PBDA). No mitigation strategies for PBDA currently exist. In this paper, a mitigation strategy called RVMS, based on node reputation values (RVs), is proposed to detect malicious nodes. A node calculates a neighbor node's RV through direct and indirect RV evaluation, and uses a Markov chain to predict the neighbor's current RV state from its RV history. The RV state is then used to decide whether to discard the Interest packet. Finally, the effectiveness of RVMS is verified through modeling and experiment. The experimental results show that RVMS can mitigate PBDA.
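A minimal sketch of the Markov-chain step, predicting a neighbor's next reputation state from its current state distribution; the states and transition matrix are hypothetical:

```python
import numpy as np

STATES = ["good", "suspicious", "malicious"]

# Hypothetical transition matrix estimated from the neighbor's RV history:
# rows = current state, columns = next state.
T = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.50, 0.20],
              [0.05, 0.25, 0.70]])

belief = np.array([0.2, 0.5, 0.3])  # current belief over the neighbor's RV state
predicted = belief @ T              # one-step Markov prediction
print(dict(zip(STATES, predicted.round(3))))

# Forwarding decision: discard the Interest packet if the neighbor is
# predicted to be malicious with high probability.
DISCARD_THRESHOLD = 0.4
discard = predicted[STATES.index("malicious")] > DISCARD_THRESHOLD
print("discard interest:", discard)
```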
Existing data management and search systems for the Internet of Things use centralized databases. As a result, such server-based systems exhibit security vulnerabilities such as IP spoofing, a single point of failure, and Sybil attacks. This paper proposes a blockchain-based data management system that ensures security by using ECDSA digital signatures and the SHA-256 hash function. The location, given as the IP address of the data owner, and the data name are recorded in a block that is included in the blockchain. Furthermore, we devise a data management and search method based on analyzing block hash values. By exploiting security properties of the blockchain such as authentication, non-repudiation, and data integrity, this system has a security advantage over previous data management and search systems built on centralized databases or P2P networks.
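A minimal sketch of the block structure described here, using SHA-256 chaining; the field names are assumptions, and the ECDSA signature is shown as a placeholder rather than a full key ceremony:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"prev_hash": "0" * 64, "data_name": "temperature/room1",
           "owner_ip": "203.0.113.7", "signature": "<ECDSA sig of owner>"}

block2 = {"prev_hash": block_hash(genesis), "data_name": "humidity/room1",
          "owner_ip": "203.0.113.9", "signature": "<ECDSA sig of owner>"}

# Searching: walk the chain and match on the recorded data name.
chain = [genesis, block2]
hits = [b for b in chain if b["data_name"].startswith("humidity/")]
print(hits[0]["owner_ip"], block_hash(block2)[:16])
```

Because each block embeds the previous block's hash, tampering with any recorded owner IP or data name invalidates every later hash, which is what gives the lookup its integrity property.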
Top-level domains play an important role in the domain name system, and close attention should be paid to their security. In this paper, we found many configuration anomalies in top-level domains by analyzing their resource records. We obtained the resource records of top-level domains from the root name servers and from the authoritative servers of the top-level domains themselves. By comparing these resource records, we observed several anomalies: for example, there are 8 servers shared by more than one hundred top-level domains; some TTL or SERIAL fields of resource records obtained from different NS servers of the same top-level domain were inconsistent; and some authoritative servers of top-level domains were unreachable. These anomalies may affect the availability of top-level domains. We hope they draw top-level domain administrators' attention to the security of their domains.
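A sketch of the kind of consistency check described, using the dnspython library (assuming its standard query API) to compare SOA SERIAL values across a TLD's listed name servers; the TLD name is a placeholder:

```python
import dns.message
import dns.query
import dns.resolver

TLD = "example."  # placeholder top-level domain

serials = {}
for ns in dns.resolver.resolve(TLD, "NS"):
    ns_name = str(ns.target)
    ns_ip = str(dns.resolver.resolve(ns_name, "A")[0])
    try:
        resp = dns.query.udp(dns.message.make_query(TLD, "SOA"), ns_ip, timeout=3)
        serials[ns_name] = resp.answer[0][0].serial
    except Exception:
        serials[ns_name] = None  # unreachable authoritative server: itself an anomaly

if len({s for s in serials.values() if s is not None}) > 1:
    print("inconsistent SERIAL fields across NS servers:", serials)
```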
PHP is one of the most popular web development tools in use today. A major concern, though, is the improper and insecure use of the language by application developers, which has motivated the development of various static analyses that detect security vulnerabilities in PHP programs. However, many of these approaches do not handle recent, important PHP features such as object orientation, which greatly limits their use in practice. In this paper, we present OOPIXY, a security analysis tool that extends the PHP security analyzer PIXY to support reasoning about object-oriented features in PHP applications. Our empirical evaluation shows that OOPIXY detects 88% of the security vulnerabilities found in micro benchmarks. When used on real-world PHP applications, OOPIXY detects security vulnerabilities that could not be detected using state-of-the-art tools, while retaining a high level of precision. We have contacted the maintainers of those applications; two applications' development teams verified the correctness of our findings and are currently working on fixing the bugs that lead to those vulnerabilities.
Dynamic software updating (DSU) is an extremely useful capability during software evolution. It can be used to reduce downtime costs, apply security enhancements, and support profiling and the testing of new functionality. There are many studies and solutions addressing the diverse problems that dynamic software updating introduces, but there is a lack of research comparing the various approaches with respect to the changes they support and their demands on resources. In this paper, we compare currently available concepts for the Java programming language that deal with dynamically applied changes, and we measure the impact of those changes on computer resource demands.