Biblio
Mobile applications frequently access sensitive personal information to meet user or business requirements. Because such information is generally sensitive, regulators increasingly require mobile-app developers to publish privacy policies that describe what information is collected. Furthermore, regulators have fined companies when these policies are inconsistent with the actual data practices of mobile apps. To help mobile-app developers check their privacy policies against their apps' code for consistency, we propose a semi-automated framework that consists of a policy-terminology-to-API-method map, which links policy phrases to API methods that produce sensitive information, and an information flow analysis that detects misalignments. We present an implementation of our framework based on a privacy-policy-phrase ontology and a collection of mappings from API methods to policy phrases. Our empirical evaluation on 477 top Android apps discovered 341 potential privacy policy violations.
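To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of consistency check such a framework enables: a map from policy phrases to Android API methods that produce the corresponding data, compared against the sensitive API methods whose data a flow analysis observes leaving the app. The phrase names and the tiny hard-coded sets are illustrative only, not the paper's ontology or mappings.

```python
# Hypothetical policy-phrase -> API-method map (illustrative, not the paper's data).
policy_phrase_to_apis = {
    "location information": {"LocationManager.getLastKnownLocation"},
    "device identifier":    {"TelephonyManager.getDeviceId"},
}
declared_phrases = {"location information"}                 # phrases found in the app's policy
observed_apis = {"TelephonyManager.getDeviceId",            # sensitive APIs whose data leaves the app
                 "LocationManager.getLastKnownLocation"}

# Any observed sensitive API whose policy phrase is not declared is a potential violation.
undeclared = {api for phrase, apis in policy_phrase_to_apis.items()
              if phrase not in declared_phrases
              for api in apis & observed_apis}
print(undeclared)   # -> {'TelephonyManager.getDeviceId'}
```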
Chang-Chen-Wang's (3, n) method for Secret grayscale image Sharing among n grayscale cover images with participant Authentication and damaged-pixel Repairing (SSAR) is analyzed; it restores the secret image from any three of the cover images used. We show that SSAR may fail, is unable to recognize a fake participant, and has a repairing ability limited to 62.5%. We propose a (4, n) enhancement, SSAR-E, that allows 100% exact restoration of a corrupted pixel using any four of the n covers and recognizes a fake participant with the help of cryptographic hash functions with 5-bit values, which allow better error detection than 4-bit values. Using a special permutation with a single cycle covering all the secret-image pixels, SSAR-E is able to restore all damaged pixels of the secret image as long as just one correct pixel is left. Contrary to SSAR, SSAR-E allows the secret image to be restored by authorized parties only. The performance and the size of the cover images for SSAR-E are the same as for SSAR.
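The "single cycle over all pixels" ingredient can be illustrated with Sattolo's algorithm, which generates a random permutation consisting of exactly one cycle; this is only a sketch of the general idea, not the specific permutation construction used by SSAR-E.

```python
import random

def single_cycle_permutation(n, seed=0):
    """Sattolo's algorithm: a uniformly random permutation that is a single n-cycle."""
    rng = random.Random(seed)
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i)          # j strictly below i keeps all indices in one cycle
        p[i], p[j] = p[j], p[i]
    return p                          # following i -> p[i] visits every index exactly once

perm = single_cycle_permutation(16)   # e.g., a 4x4 secret image, pixels flattened row by row
```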
A general survey states that the main damage malware can cause is to slow down PCs and perhaps crash some websites, which is quite wrong. A Russian antivirus software developer recently teamed up with B2B International for a worldwide study, which showed that 36% of users lose money online as a result of a malware attack. Currently, malware often cannot be detected by traditional anti-malware tools due to its polymorphic and/or metamorphic nature. Here we improve upon a current technique for detecting malware based on mining Application Programming Interface (API) calls, and we have developed the first public dataset to promote malware research.
• In a survey of cyber-attacks, 6.2% of financial attacks were due to malware, an increase of 1.3% in 2013 compared to 2012.
• Incidents of financial data theft rose by 27.6% to reach 28,400,000. The number of victims targeted by this malware reached 3,800,000, 18.6% more than in the previous year.
• Finance-oriented malware associated with Bitcoin has demonstrated the most dynamic development, whereas Zeus still tops the list for its role in stealing banking credentials.
A Solutionary study states that companies spend a staggering amount of money in the aftermath of damaging attacks: recovering from DDoS attacks costs \$6,500 per hour, and mitigating and recovering from malware attacks costs more than \$3,000 per day for up to 30 days. [1]
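As a rough illustration of API-call mining (not the paper's actual feature set or dataset), the sketch below turns per-binary API call sequences into n-gram counts and trains a standard classifier; the API names, traces and labels are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Each trace is the sequence of API calls observed for one binary (names are illustrative).
traces = [
    "CreateFileW WriteFile RegSetValueExW CreateRemoteThread",   # malicious-looking
    "CreateFileW ReadFile CloseHandle",                          # benign-looking
]
labels = [1, 0]                                                  # 1 = malware, 0 = benign

vec = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")  # unigrams and bigrams of calls
X = vec.fit_transform(traces)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(vec.transform(["RegSetValueExW CreateRemoteThread WriteFile"])))
```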
Security is a well-known critical issue, and exploitation of vulnerabilities is increasing in number, sophistication and damage. Furthermore, legacy systems tend to be difficult to upgrade, especially when security recommendations are proposed. This paper presents a strategy for legacy systems based on three disciplines that guide the adoption of secure routines while avoiding drops in production. We present a prototype framework and discuss its success in providing security to the network of a power plant.
The present work deals with prediction of damage location in a composite cantilever beam using the signal from an optical fiber sensor coupled with a neural network with a back-propagation-based learning mechanism. The experimental study uses a glass/epoxy composite cantilever beam. A notch perpendicular to the axis of the beam and spanning the width of the beam is introduced at three different locations: at mid-span, towards the free end of the beam, and towards the fixed end of the beam. A plastic optical fiber of 6 cm gage length is mounted on the top surface of the beam along its axis exactly at mid-span. A He-Ne laser is used as the light source for the optical fiber, and the light emitted from the other end of the fiber is converted to an electrical signal through a converter. A three-layer feed-forward neural network architecture is adopted, with one input layer, one hidden layer and one output layer. Three features are extracted from the signal: resonance frequency, normalized amplitude and normalized area under the resonance frequency. These three features act as inputs to the neural network input layer. The outputs qualitatively identify the location of the notch.
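A minimal sketch of the kind of network described, assuming the three extracted features as inputs and the three notch locations as output classes; the feature values, hidden-layer size and training settings below are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented feature vectors: [resonance frequency (Hz), normalised amplitude, normalised area]
X = np.array([
    [412.0, 0.95, 0.88],
    [398.0, 0.81, 0.74],
    [405.0, 0.64, 0.59],
])
y = ["mid-span", "free-end", "fixed-end"]    # notch location associated with each signal

# One hidden layer between the 3-feature input and the location output;
# gradients are computed by back-propagation during training.
model = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                      solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[400.0, 0.70, 0.65]]))
```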
A robust retrieval system ensures that the user experience is not damaged by the presence of poorly performing queries. Such robustness can be measured by risk-sensitive evaluation measures, which assess the extent to which a system performs worse than a given baseline system. However, using a particular, single system as the baseline suffers from the fact that retrieval performance varies greatly among IR systems across topics. Thus, a single system would in general fail to provide enough information about the real baseline performance for every topic under consideration, and hence would fail to measure the real risk associated with any given system. Based upon the Chi-squared statistic, we propose a new measure, ZRisk, which is more promising since it takes multiple baselines into account when measuring risk, and a derivative measure called GeoRisk, which enhances ZRisk by also taking into account the overall magnitude of effectiveness. This paper demonstrates the benefits of ZRisk and GeoRisk upon TREC data, and shows how to exploit GeoRisk for risk-sensitive learning to rank, thereby making use of multiple baselines within the learning objective function to obtain effective yet risk-averse/robust ranking systems. Experiments using 10,000 topics from the MSLR learning-to-rank dataset demonstrate the efficacy of the proposed Chi-square statistic-based objective function.
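The following sketch shows one way a chi-square-style risk score over a systems-by-topics effectiveness matrix can be computed, in the spirit of ZRisk and GeoRisk; the matrix, the alpha weight and the exact normalisation are illustrative assumptions rather than the paper's definitive formulation.

```python
import numpy as np
from scipy.stats import norm

X = np.array([               # rows: systems, columns: topics (e.g., AP per topic); made-up values
    [0.30, 0.10, 0.50],
    [0.25, 0.20, 0.40],
])
alpha = 2.0                  # extra weight on topics where a system falls below expectation

row = X.sum(axis=1, keepdims=True)
col = X.sum(axis=0, keepdims=True)
E = row * col / X.sum()                      # expected scores, contingency-table style
Z = (X - E) / np.sqrt(E)                     # standardised residuals
zrisk = np.where(Z >= 0, Z, (1 + alpha) * Z).sum(axis=1)
georisk = np.sqrt(X.mean(axis=1) * norm.cdf(zrisk / X.shape[1]))

print(zrisk, georisk)
```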
Small unmanned aerial vehicles (UAVs) play major roles in a variety of applications, including aerial exploration and surveillance, transport, videography/photography and other areas. Other real-life applications of UAVs have also been studied, one of them being as a 'disaster response' component. In a post-disaster situation, UAVs can be used for search and rescue, damage assessment, rapid response and other emergency operations. However, in a disaster response situation it is very challenging to predict whether the climatic conditions are suitable to fly the UAV. An efficient dynamic path planning technique is also necessary for effective damage assessment. In this paper, such dynamic path planning algorithms are proposed for a micro-jet, a small fixed-wing UAV, for data collection and dissemination in a post-disaster situation. The proposed algorithms have been implemented on the paparazziUAV simulator considering different environmental parameters (wind speed, wind direction, etc.) and UAV calibration parameters such as battery level and flight duration. The results have been compared with the baseline navigation algorithm used in the paparazziUAV simulator. It has been observed that the proposed navigation techniques work well in terms of the different calibration parameters (flight duration, battery level) and can be effective not only for shelter-point detection but also for conserving battery level and flight time for the micro-jet in a post-disaster scenario. The proposed techniques take approximately 20% less time and consume approximately 19% less battery power than the baseline navigation technique. From the analysis of the results, it has been observed that the proposed work can help in estimating the feasibility of flying a UAV in a disaster response situation. Finally, the proposed path planning techniques have been carried out in a field test using a micro-jet. It has been observed that our proposed dynamic path planning algorithms give results close to the simulation in terms of flight duration and battery-level consumption.
Cyber-resilient organizations, their functions and their computing infrastructures should be tolerant of rapid and unexpected changes in the environment. Information security is an organization-wide common mission whose success strongly depends on efficient knowledge sharing. For this purpose, semantic wikis have proved their strength as flexible collaboration and knowledge-sharing platforms. However, there has not been notable academic research on how semantic wikis could be used as an information security management platform in organizations for improved cyber resilience. In this paper, we propose to use a semantic wiki as an agile information security management platform. More precisely, the wiki contents are based on a structured model of the NIST Special Publication 800-53 information security control catalogue, which is extended in this research with additional properties that support information security management and especially security control implementation. We present common use cases for managing information security in organizations and show how the use cases can be implemented using the semantic wiki platform. As organizations seek cyber resilience, where the focus is on the availability of cyber-related assets and services, we extend control selection with an option to focus on availability. The results of the study show that a semantic-wiki-based information security management and collaboration platform can provide a cost-efficient solution for improved cyber resilience, especially for small and medium-sized organizations that struggle to develop information security with limited resources.
Phishing is one of the major issues in cyber security. In phishing, attackers steal sensitive information from users by impersonating legitimate websites. The information captured by the phisher is used in a variety of scenarios, such as illegally buying goods through online transactions, or the collected user data may be sold to illegal sources. To date, various detection techniques have been proposed by different researchers, but phishing detection still remains a challenging problem. While phishing remains a threat for all users, persons with visual impairments fall under the soft-target category, as they primarily depend on the non-visual web access mode. Persons with visual impairments depend solely on the audio generated by screen readers to identify and comprehend a web page. This weak link can be exploited by attackers to create impersonating sites that produce the same audio output but are visually different. This paper proposes a model titled "MASPHID" (Model for Assisting Screenreader users to Phishing Detection) to assist persons with visual impairments in detecting phishing sites which are aurally similar but visually dissimilar. The proposed technique is designed so that phishing detection is carried out without burdening users with technical details. The model works against zero-day phishing attacks and achieves high accuracy in evaluation.
In a discrete-time linear multi-agent control system where the agents are coupled via an environmental state, knowledge of the environmental state is desirable for controlling the agents locally. However, since the environmental state depends on the behavior of the agents, sharing it directly among these agents jeopardizes the privacy of the agents' profiles, defined as the combination of an agent's initial state and its sequence of local control inputs over time. A commonly used solution is to randomize the environmental state before sharing; this leads to a natural trade-off between the privacy of the agents' profiles and the variance of estimating the environmental state. By treating the multi-agent system as a probabilistic model of the environmental state parametrized by the agents' profiles, we show that when the agents' profiles are ε-differentially private, there is a lower bound on the ℓ1-induced norm of the covariance matrix of the minimum-variance unbiased estimator of the environmental state. This lower bound is achieved by a randomized mechanism that uses Laplace noise.
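A minimal sketch of the Laplace mechanism referred to at the end of the abstract, assuming a scalar environmental state and a known sensitivity (both stand-ins for the quantities derived in the paper): the noise scale sensitivity/ε gives variance 2(sensitivity/ε)², which is the kind of privacy/variance trade-off the lower bound characterises.

```python
import numpy as np

def laplace_randomize(state, sensitivity, eps, rng=None):
    """Share state plus Laplace noise of scale sensitivity/eps (variance 2*(sensitivity/eps)**2)."""
    if rng is None:
        rng = np.random.default_rng()
    return state + rng.laplace(loc=0.0, scale=sensitivity / eps)

# Hypothetical numbers: a smaller eps (stronger privacy) forces a larger noise scale,
# and hence a larger estimation variance for the environmental state.
noisy_state = laplace_randomize(state=1.7, sensitivity=0.5, eps=0.1)
```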
In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. In this problem, the difficulty is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time and derive an upper bound on them. With the upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results to numerically verify the theoretical results.
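As a loose, toy illustration only (not the paper's mechanism), one can perturb the private linear term of each agent's quadratic objective once with Laplace noise and then run a projected-gradient loop through the cloud on the perturbed objectives; all constants below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
q = np.array([1.0, 2.0, 0.5])            # private quadratic coefficients, f_i(x) = 0.5*q_i*x^2 + b_i*x
b = np.array([-1.0, 0.5, 2.0])           # private linear coefficients
eps, sensitivity, step = 0.5, 1.0, 0.1   # illustrative privacy and step-size constants

b_noisy = b + rng.laplace(0.0, sensitivity / eps, size=b.shape)  # perturb each objective once

x = 0.0
for _ in range(200):                     # cloud iterates on the perturbed objectives
    grad = (q * x + b_noisy).mean()      # average of the agents' (noisy) gradients
    x = float(np.clip(x - step * grad, -5.0, 5.0))               # projection onto a toy constraint set
print(x)
```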
Given a model with multiple input parameters and multiple possible sources for collecting data for those parameters, a data collection strategy is a way of deciding from which sources to sample data in order to reduce the variance of the output of the model. Cain and Van Moorsel have previously formulated the problem of optimal data collection strategies, where each parameter can be associated with a prior normal distribution and sampling is associated with a cost. In this paper, we present ADaCS, a new tool built as an extension of PRISM, which automatically analyses all possible data collection strategies for a model and selects the optimal one. We illustrate ADaCS on attack trees, which are a structured approach to analysing the impact and the likelihood of success of attacks and defenses on computer and socio-technical systems. Furthermore, we introduce a new strategy exploration heuristic that significantly improves on a brute-force approach.
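A toy sketch of the underlying trade-off, under the strong assumption that the model output is a weighted sum of parameters with normal priors and that sampling a source shrinks its parameter's variance as σ²/n; a greedy loop then spends a budget on the source with the best variance reduction per unit cost. ADaCS itself works on PRISM models and explores strategies exhaustively or heuristically; this is not its algorithm.

```python
import numpy as np

w      = np.array([1.0, 0.5, 2.0])    # assumed sensitivity of the output to each parameter
sigma2 = np.array([4.0, 1.0, 0.25])   # prior variances of the three parameters
cost   = np.array([1.0, 2.0, 5.0])    # cost of one sample from each source
n      = np.ones(3)                   # effective samples so far (the prior counts as one)
budget = 20.0

def out_var(n):
    """Variance of the model output under the weighted-sum assumption."""
    return float(np.sum(w ** 2 * sigma2 / n))

while budget >= cost.min():
    affordable = np.where(cost <= budget)[0]
    gains = [(out_var(n) - out_var(n + np.eye(3)[i])) / cost[i] for i in affordable]
    i = affordable[int(np.argmax(gains))]
    n[i] += 1
    budget -= cost[i]

print(n, out_var(n))                  # samples bought per source, and the residual output variance
```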
Lateral-movement-based attacks are increasingly leading to compromises in large private and government networks, often resulting in information exfiltration or service disruption. Such attacks are often slow and stealthy and usually evade existing security products. To enable effective detection of such attacks, we present a new approach based on graph-based modeling of the security state of the target system and correlation of diverse indicators of anomalous host behavior. We believe that, irrespective of the specific attack vectors used, attackers typically establish a command and control channel to operate and move within the target system to escalate their privileges and reach sensitive areas. Accordingly, we identify important features of command and control and lateral movement activities and extract them from internal and external communication traffic. Driven by the analysis of these features, we propose the use of multiple anomaly detection techniques to identify compromised hosts. These methods include Principal Component Analysis, k-means clustering, and Median Absolute Deviation-based outlier detection. We evaluate the accuracy of identifying compromised hosts by using injected attack traffic in a real enterprise network dataset, for various attack communication models. Our results show that the proposed approach can detect infected hosts with high accuracy and a low false positive rate.
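For the MAD-based component, a common formulation is the modified z-score with the 0.6745 scaling factor and a cutoff around 3.5; the sketch below applies it to an invented per-host feature series and is only indicative of the idea, not the paper's exact detector.

```python
import numpy as np

x = np.array([3, 4, 2, 5, 3, 4, 41])             # invented per-hour feature values for one host
med = np.median(x)
mad = np.median(np.abs(x - med))
modified_z = 0.6745 * (x - med) / mad            # modified z-score (Iglewicz & Hoaglin convention)
print(x[np.abs(modified_z) > 3.5])               # values flagged as anomalous, here [41]
```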
The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information leads to improved services through geo-contextualization, it raises privacy concerns, as new knowledge can be inferred from location records, such as home/work places, habits or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature in recent years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of the protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters for the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve that, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and utility metrics. This framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and provide the formal relationship between its ε configuration parameter and two privacy and utility metrics.
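For reference, a sketch of the planar Laplace mechanism underlying Geo-Indistinguishability (Andrés et al.), which draws a uniformly random direction and a radius obtained by inverting the radial CDF via the Lambert W function; the choice of ε and the planar-coordinate assumption are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(eps, rng=None):
    """Draw a 2-D offset whose radius follows the radial CDF C(r) = 1 - (1 + eps*r)*exp(-eps*r)."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)                             # uniform direction
    p = rng.uniform(0.0, 1.0)                                         # probability mass for the radius
    r = -(1.0 / eps) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)  # invert the CDF via Lambert W
    return r * np.cos(theta), r * np.sin(theta)

# Hypothetical usage on coordinates expressed in metres in a local planar projection,
# with eps in units of 1/metre.
dx, dy = planar_laplace_noise(eps=0.01)
```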
In the last few years, a shift from mass production to mass customisation has been observed in industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning from Demonstration (LfD) is an easy way to program robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria for high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space, after which small segments can be adapted on-the-fly using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. Once the system performs at a satisfactory level after adaptation, the operator is removed from the loop. The robot then proceeds in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application: a KUKA LBR iiwa is taught how to draw a figure eight, which is reshaped by the operator during adaptation.
Network functions (NFs), such as firewalls, NATs and IDSes, are widely deployed in today's networks. However, there is currently no standard specification or modeling language that can accurately describe the complexity and diversity of different NFs. There have recently been research efforts to propose NF models; however, they are often generated manually and are thus error-prone. This paper proposes a method to automatically synthesize NF models via program analysis. We develop a tool called NFactor, which conducts code refactoring and program slicing on NF source code in order to generate its forwarding model. We demonstrate its usefulness on two NFs and evaluate its correctness. A few applications of NFactor are described, including network verification.
Debugging large networks is always a challenging task. Software Defined Networking (SDN) offers a centralized control platform where operators can statically verify network policies, instead of checking configuration files device by device. While such static verification is useful, it is still not enough: due to data plane faults, packets may not be forwarded according to control plane policies, resulting in network faults at runtime. To address this issue, we present VeriDP, a tool that can continuously monitor what we call control-data plane consistency, defined as the consistency between control plane policies and data plane forwarding behaviors. We prototype VeriDP with small modifications of both hardware and software SDN switches, and show that it can achieve a verification speed of 3 μs per packet, with a false negative rate as low as 0.1%, for the Stanford backbone and Internet2 topologies. In addition, when verification fails, VeriDP can localize faulty switches with a probability as high as 96% for fat-tree topologies.