Bibliography
We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^(1−ε)-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).
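For concreteness, the resilience notion used throughout can be stated for metric clustering as follows (a standard formulation; papers differ on whether the perturbed distances must themselves form a metric):

\[
d(x, y) \;\le\; d'(x, y) \;\le\; \alpha \, d(x, y) \qquad \text{for all } x, y \in X .
\]

An instance (X, d) is α-perturbation resilient if every perturbation d' satisfying the bound above has the same unique optimal clustering as d; α-stability for cut problems is defined analogously by perturbing edge weights by factors in [1, α].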
There are many everyday situations in which users need to enter their user identification (user ID), such as logging in to computer systems and entering secure offices. In such situations, contactless passive IC cards are convenient because users can input their user ID simply by passing the card over a reader. However, these cards cannot be used for successive interactions. To address this issue, we propose AccelTag, a contactless IC card equipped with an acceleration sensor and a liquid crystal display (LCD). AccelTag utilizes high-function RFID technology so that the acceleration sensor and the LCD can also be driven by a wireless power supply. With its built-in acceleration sensor, AccelTag can acquire its direction and movement when it is waved over the reader. We demonstrate several applications using AccelTag, such as displaying several types of information on the card depending on the user's requirements.
In this paper, a distributed architecture for implementing a smart city is proposed to facilitate various smart features, such as solid waste management, efficient urban mobility and public transport, smart parking, robust IT connectivity, and safety and security of citizens, along with a roadmap for achieving it. We explain how massive volumes of IoT data can be analyzed and present a layered IoT architecture. We also discuss why data integration is important for analyzing and processing the data collected by different smart devices such as sensors, actuators, and RFIDs. A wireless sensor network can be used to sense data from various locations, but there has to be more to it than placing sensors everywhere for everything. We therefore explain why sensors alone are not sufficient for data collection and how human volunteers can be engaged to collect data. Communication protocols between the volunteers restrict the sharing of data and ensure that the target area is covered with a minimum number of volunteers, with each volunteer covering a predefined area. The proposed architecture uses a single central server to store all data, and both data processing and the processing of user queries take place on this centralized server.
Virtual Routers (VRs) are increasingly common in cloud environments. VRs route traffic between network segments and support network services. Routers, including VRs, have been the target of several recent high-profile attacks, emphasizing the need for more security measures, including security monitoring. However, existing agent-based monitoring systems are incompatible with a VR's temporary nature, stripped-down operating system, and placement in the cloud. As a result, VRs are often not monitored, leading to undetected security incidents. This paper proposes a new security monitoring design that leverages virtualization instead of in-guest agents. Its hypervisor-based system, Arav, scrutinizes VRs through a novel application of Virtual Machine Introspection (VMI) breakpoint injection. Arav monitored and addressed security-related events in two common VRs, pfSense and VyOS, and detected four attacks against two popular VR services, Quagga and OpenVPN. Arav's performance overhead is negligible, less than 0.63%, demonstrating VMI's utility in monitoring virtual machines unsuitable for traditional security monitoring.
The SCADA infrastructure is a key component for power grid operations. Securing the SCADA infrastructure against cyber intrusions is thus vital for a well-functioning power grid. However, the task remains a particular challenge, not least because not all available security mechanisms are easily deployable in these reliability-critical and complex, multi-vendor environments, which host modern systems alongside legacy ones to support a range of sensitive power grid operations. This paper examines how effective a few countermeasures are likely to be in SCADA environments, including those that are commonly considered out of bounds. The results show that granular network segmentation is a particularly effective countermeasure, followed by frequent patching of systems (which unfortunately remains difficult in practice). The results also show that the enforcement of a password policy and restrictive network configuration, including whitelisting of devices, contributes to increased security, though best in combination with granular network segmentation.
Despite significant recent advances, the effectiveness of symbolic execution is limited when used to test complex, real-world software. One of the main scalability challenges is related to constraint solving: large applications and long exploration paths lead to complex constraints, often involving big arrays indexed by symbolic expressions. In this paper, we propose a set of semantics-preserving transformations for array operations that take advantage of contextual information collected during symbolic execution. Our transformations lead to simpler encodings and hence better performance in constraint solving. The results we obtain are encouraging: we show, through an extensive experimental analysis, that our transformations help to significantly improve the performance of symbolic execution in the presence of arrays. We also show that our transformations enable the analysis of new code, which would be otherwise out of reach for symbolic execution.
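As a hypothetical illustration of this style of transformation (not the authors' actual rewrite rules), bounds on a symbolic index collected from the path condition can turn a read at a symbolic position into an if-then-else over a few concrete cells, which solvers typically handle much faster; the sketch below checks such a rewrite with the Z3 Python API:

# Sketch (not the authors' tool): use a contextual bound on a symbolic
# index to replace a symbolic array read with an if-then-else over
# concrete cells -- one flavor of semantics-preserving array rewriting.
from z3 import Array, IntSort, Int, Select, If, And, Implies, prove

a = Array('a', IntSort(), IntSort())
i = Int('i')
ctx = And(i >= 0, i < 3)           # bound collected from the path condition

original = Select(a, i)            # read at a symbolic index
rewritten = If(i == 0, Select(a, 0),
            If(i == 1, Select(a, 1),
                       Select(a, 2)))

# The rewrite is semantics-preserving under the contextual bound:
prove(Implies(ctx, original == rewritten))   # prints "proved"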
Developing Big Data Analytics workloads often involves trial and error debugging, due to the unclean nature of datasets or wrong assumptions made about data. When errors (e.g., program crash, outlier results, etc.) arise, developers are often interested in identifying a subset of the input data that is able to reproduce the problem. BigSift is a new faulty data localization approach that combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. BigSift redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations, specifically geared towards the iterative nature of automated debugging workloads. BigSift improves the accuracy of fault localizability by several orders of magnitude ($\sim 10^3$ to $10^7\times$) compared to Titian data provenance, and improves performance by up to 66× compared to Delta Debugging, an automated fault-isolation technique. For each faulty output, BigSift is able to localize fault-inducing data within 62% of the original job running time.
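For context, the Delta Debugging baseline that BigSift is evaluated against minimizes a failure-inducing input roughly as in the following simplified ddmin sketch (illustrative only; `failing` stands for the test oracle function):

# Simplified ddmin-style input minimization (the Delta Debugging baseline,
# not BigSift itself). `failing` is a test-oracle predicate that returns
# True when the given subset of input records still reproduces the fault.
def ddmin(records, failing):
    assert failing(records)
    n = 2
    while len(records) >= 2:
        chunk = max(len(records) // n, 1)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        reduced = False
        for i, s in enumerate(subsets):
            complement = [r for j, sub in enumerate(subsets) if j != i for r in sub]
            if failing(s):                       # a single chunk suffices
                records, n, reduced = s, 2, True
                break
            if len(subsets) > 2 and failing(complement):
                records, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(records):                # finest granularity reached
                break
            n = min(n * 2, len(records))
    return records

# Example oracle: the job "fails" whenever a negative record is present.
print(ddmin(list(range(-3, 10)), lambda rs: any(r < 0 for r in rs)))  # [-3]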
The SDN (Software Defined Networking) paradigm brings flexibility to network management and enables huge opportunities for network programmability. To address the scalability issue raised by the centralized architecture of SDN, multi-controller deployments (or distributed controller systems) are envisioned. In this paper, we focus on increasing the diversity of the SDN control plane so as to enhance network security. Our goal is to limit the ability of a malicious controller to compromise its neighboring controllers, and by extension, the rest of the controllers. We investigate a heterogeneous Susceptible-Infectious-Susceptible (SIS) epidemic model to evaluate the security performance and propose a coloring algorithm, based on community detection, to increase diversity. The simulation results demonstrate that our algorithm can reduce the infection rate in the control plane, and our work shows that diversity must be introduced in network design for network security.
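For reference, a heterogeneous SIS epidemic over the controller topology is commonly written as the mean-field system below (a standard formulation; the paper's exact variant may differ in how diversity enters the rates):

\[
\frac{dp_i(t)}{dt} \;=\; \bigl(1 - p_i(t)\bigr) \sum_{j} \beta_{ij} A_{ij}\, p_j(t) \;-\; \delta_i\, p_i(t),
\]

where p_i is the probability that controller i is infected, A is the adjacency matrix of the control plane, and β_{ij}, δ_i are infection and recovery rates; assigning different software variants (colors) to neighboring controllers lowers the effective β_{ij} across edges whose endpoints differ.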
SDN is a new network framework that can be controlled and defined by software programming, and OpenFlow is the communication protocol between the SDN control plane and data plane. With the centralized control of SDN, the network is more vulnerable to Advanced Persistent Threats (APTs) than traditional networks. After deeply analyzing the process of an APT at each stage in SDN, this paper proposes an APT detection method based on a Hidden Markov Model (HMM), which can fully reflect the relationship between attack behaviors and APT stages. Experiments show that the method detects APTs in SDN more accurately and with less overhead.
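To make the machinery concrete, the sketch below decodes the most likely APT stage sequence from observed behaviors with the Viterbi algorithm; the states, observations, and probabilities are invented for illustration and are not the paper's trained parameters:

# Toy Viterbi decoding of APT stages (hidden states) from observed
# behaviors. All parameters below are illustrative, not the paper's.
states = ["recon", "intrusion", "lateral", "exfiltration"]
start = {"recon": 0.7, "intrusion": 0.2, "lateral": 0.05, "exfiltration": 0.05}
trans = {s: {t: 0.25 for t in states} for s in states}   # uniform for brevity
emit = {
    "recon":        {"scan": 0.8, "flow_mod": 0.1, "big_xfer": 0.1},
    "intrusion":    {"scan": 0.2, "flow_mod": 0.7, "big_xfer": 0.1},
    "lateral":      {"scan": 0.3, "flow_mod": 0.6, "big_xfer": 0.1},
    "exfiltration": {"scan": 0.1, "flow_mod": 0.1, "big_xfer": 0.8},
}

def viterbi(obs):
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            ptr[s] = prev
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
        v.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):       # follow back-pointers to recover the path
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["scan", "flow_mod", "big_xfer"]))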
Mobile Healthcare Networks (MHN) continuously collect the patients' health data sensed by wearable devices, and analyze the collected data, pre-processed by servers and combined with medical histories, such that disease diagnosis and treatment are improved and the heavy burden on the existing health services is relieved. However, the network is vulnerable to Sybil attacks, which would degrade network performance, disrupt proceedings, manipulate data, or cheat others maliciously. What's more, users are reluctant to leak identity privacy, so identity privacy preservation makes Sybil defenses more difficult. One of the best choices is mutually authenticating each other with no identity information involved. Thus, we propose a fine-grained authentication scheme based on Attribute-Based Signature (ABS) using a lattice assumption, where a signer is authorized by an attribute set instead of a single identity string. This ABS scheme uses the Fiat-Shamir framework and supports flexible threshold signature predicates. Moreover, to anonymously guarantee the integrity and availability of health data in MHN, we design an anonymous anti-Sybil attack protocol based on our ABS scheme, so that Sybil attacks are prevented. As there is no linkability between identities and services, users' identity privacy is protected. Finally, we have analyzed the security and simulated the running time of our proposed ABS scheme.
Pattern recognition in the sparse representation (SR) framework has been very successful. In this model, the test sample can be represented as a sparse linear combination of training samples by solving a norm-regularized least squares problem. However, the value of the regularization parameter is applied indiscriminately across the whole dictionary. To enhance the group concentration of the coefficients and also to improve sparsity, we propose a new SR model called the adaptive sparse representation classifier (ASRC). In ASRC, a sparse-coefficient-strengthening term is added to the objective function. The model is solved by the artificial bee colony (ABC) algorithm with a variable step size to speed up convergence. Also, a partition strategy for large scale dictionaries is adopted to lighten each bee's load and remove irrelevant groups. On different data sets, we empirically demonstrate the properties of the new model and its recognition performance.
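The baseline SR model referred to here is the standard ℓ1-regularized least squares formulation (shown for orientation; the ASRC-specific strengthening term is the authors' contribution and is not reproduced):

\[
\hat{\mathbf{x}} \;=\; \operatorname*{arg\,min}_{\mathbf{x}} \; \lVert \mathbf{y} - D\mathbf{x} \rVert_2^2 \;+\; \lambda \lVert \mathbf{x} \rVert_1,
\]

where y is the test sample, D is the dictionary of training samples, x holds the sparse coefficients, and λ is the regularization parameter that ASRC makes adaptive instead of fixing it for the whole dictionary.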
Predicting software reliability has become a very important problem today. Every day, many new software products are introduced to the market, and some of them face failures because their usage and handling are very hard; many intelligent techniques are already used to predict software reliability. In this paper we give a practical overview of these techniques and the differences between them and our new method. Notably, prediction models built on one dataset show a large decrease in accuracy when they are used with new data. This review targets SE problems of practical importance, such as software development and cost estimation and software reliability prediction, and also the development of computational tools with enhanced power, scalability, and flexibility that can interact more effectively with human beings.
Support vector machines (SVMs) have been widely used for classification in machine learning and data mining. However, SVM faces a huge challenge in large scale classification tasks. Recent progress has enabled the additive kernel version of SVM to solve such large scale problems nearly as fast as a linear classifier. This paper proposes a new accelerated mini-batch stochastic gradient descent algorithm for SVM classification with additive kernel (AK-ASGD). On the one hand, the gradient is approximated by the sum of a scalar polynomial function for each feature dimension; on the other hand, Nesterov's acceleration strategy is used. The experimental results on benchmark large scale classification data sets show that our proposed algorithm achieves higher testing accuracy and a faster convergence rate.
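A minimal sketch of the acceleration ingredient alone, assuming a plain linear SVM on the hinge loss (the additive-kernel feature encoding and the per-dimension polynomial gradient approximation of AK-ASGD are omitted):

import numpy as np

# Nesterov-accelerated mini-batch SGD for a linear SVM (hinge loss + L2).
# Shows only the acceleration scheme, not the AK-ASGD gradient approximation.
def nesterov_sgd_svm(X, y, lam=1e-3, lr=0.1, momentum=0.9,
                     epochs=20, batch=32, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for b in range(0, len(X), batch):
            i = idx[b:b + batch]
            look = w + momentum * v                 # Nesterov look-ahead point
            margin = y[i] * (X[i] @ look)
            active = margin < 1                     # violated hinge constraints
            grad = lam * look - (y[i][active, None] * X[i][active]).sum(0) / len(i)
            v = momentum * v - lr * grad
            w = w + v
    return w

# Toy usage: two separable Gaussian blobs with labels in {-1, +1}.
X = np.vstack([np.random.randn(50, 2) + 2, np.random.randn(50, 2) - 2])
y = np.array([1] * 50 + [-1] * 50)
w = nesterov_sgd_svm(X, y)
print((np.sign(X @ w) == y).mean())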
Machine learning algorithms have been proven to be vulnerable to a special type of attack in which an active adversary manipulates the training data of the algorithm in order to reach some desired goal. Although this type of attack has been proven in previous work, it has not been examined in the context of a data stream, and no work has been done to study a targeted version of the attack. Furthermore, current literature does not provide any metrics that allow a system to detect these attack while they are happening. In this work, we examine the targeted version of this attack on a Support Vector Machine(SVM) that is learning from a data stream, and examine the impact that this attack has on current metrics that are used to evaluate a models performance. We then propose a new metric for detecting these attacks, and compare its performance against current metrics.
Wide Area Monitoring Systems (WAMSs) provide an essential building block for Smart Grid supervision and control. Distributed Phasor Measurement Units (PMUs) allow accurate clock-synchronized measurements of voltage and current phasors (amplitudes, phase angles) and frequencies. The sensor data from PMUs provide situational awareness in the grid, and are used as input for control decisions. A modification of sensor data can severely impact grid stability, overall power supply, and physical devices. Since power grids are critical infrastructures, WAMSs are tempting targets for all kinds of attackers, including well-organized and motivated adversaries such as terrorist groups or adversarial nation states. Such groups possess sufficient resources to launch sophisticated attacks. In this paper, we provide an in-depth analysis of attack possibilities on WAMSs. We model the dependencies and building blocks of Advanced Persistent Threats (APTs) on WAMSs using attack trees. We consider the whole WAMS infrastructure, including aggregation and data collection points, such as Phasor Data Concentrators (PDCs), classical IT components, and clock synchronization. Since Smart Grids are cyber-physical systems, we consider physical perturbations, in addition to cyber attacks in our models. The models provide valuable information about the chain of cyber or physical attack steps that can be combined to build a sophisticated attack for reaching a higher goal. They assist in the assessment of physical and cyber vulnerabilities, and provide strategic guidance for the deployment of suitable countermeasures.
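A minimal sketch of the AND/OR attack tree structure used in such modeling; the node names are illustrative inventions, not the paper's actual trees:

# Minimal AND/OR attack tree: a goal is reachable if all children are
# reachable (AND) or any child is reachable (OR). Leaves are basic attack
# steps. Node names below are illustrative, not the paper's models.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)

def reachable(node, achieved_steps):
    if node.gate == "LEAF":
        return node.name in achieved_steps
    results = [reachable(c, achieved_steps) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

falsify_phasors = Node("falsify phasor data", "OR", [
    Node("compromise PDC", "AND", [
        Node("phish operator"), Node("escalate privileges")]),
    Node("spoof GPS clock"),           # physical perturbation path
])
print(reachable(falsify_phasors, {"spoof GPS clock"}))   # True
print(reachable(falsify_phasors, {"phish operator"}))    # False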
The software supply chain is a source of cybersecurity risk for many commercial and government organizations. Public data may be used to inform automated tools for detecting software supply chain risk during continuous integration and deployment. We link data from the National Vulnerability Database (NVD) with open version control data for the open source project OpenSSL, a widely used secure networking library that made the news when a significant vulnerability, Heartbleed, was discovered in 2014. We apply the Alhazmi-Malaiya Logistic (AML) model for software vulnerability discovery to this case. This model predicts a sigmoid cumulative vulnerability discovery function over time. Some versions of OpenSSL do not conform to the predictions of the model because they contain a temporary plateau in the cumulative vulnerability discovery plot. This temporary plateau feature is an empirical signature of a security failure mode that may be useful in future studies of software supply chain risk.
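For reference, the AML model predicts the cumulative number of discovered vulnerabilities Ω(t) as a logistic curve; in its usual published form,

\[
\frac{d\Omega}{dt} \;=\; A\,\Omega\,(B - \Omega)
\qquad\Longrightarrow\qquad
\Omega(t) \;=\; \frac{B}{B C e^{-ABt} + 1},
\]

where B is the total number of vulnerabilities eventually discovered and A, C are fitted constants; the plateaus the authors observe are sustained deviations from this sigmoid.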
Maintaining the security and integrity of our food supply chain has emerged as a critical need. In this paper, we describe a novel authentication approach that can significantly improve the security of the food supply chain. It relies on applying nuclear quadrupole resonance (NQR) spectroscopy to authenticate the contents of packaged food products. NQR is a non-invasive, non-destructive, and quantitative radio frequency (RF) spectroscopic technique. It is sensitive to subtle features of the solid-state chemical environment such that signal properties are influenced by the manufacturing process, thus generating a manufacturer-specific watermark or intrinsic tag for the product. Such tags enable us to uniquely characterize and authenticate products of identical composition but from different manufacturers based on their NQR signal parameters. These intrinsic tags can be used to verify the integrity of a product and trace it through the supply chain. We apply a support vector machine (SVM)-based classification approach that trains the SVM with measured NQR parameters and then authenticates food products by checking their test responses. Measurement on an example substance using semi-custom hardware shows promising results (95% classification accuracy), which can be further improved with better instrumentation.
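A minimal sketch of the classification step, assuming hypothetical NQR-derived features and using scikit-learn's SVC in place of the authors' implementation:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical NQR features per measurement: resonance frequency shift,
# linewidth, spin-lattice relaxation time T1, signal amplitude.
rng = np.random.default_rng(0)
X_a = rng.normal([0.0, 1.0, 5.0, 2.0], 0.2, size=(100, 4))   # manufacturer A
X_b = rng.normal([0.3, 1.2, 4.4, 2.1], 0.2, size=(100, 4))   # manufacturer B
X = np.vstack([X_a, X_b])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("authentication accuracy:", clf.score(X_te, y_te))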
In recent years, deep learning has achieved breakthrough results in various areas, such as computer vision, audio recognition, and natural language processing. However, only a few related works have investigated digital multimedia forensics and steganalysis. In this paper, we design a novel CNN (convolutional neural network) to detect audio steganography in the time domain. Unlike most existing CNN based methods, which try to capture media contents, we carefully design the network layers to suppress audio content and adaptively capture the minor modifications introduced by $\pm$1 LSB based steganography. Besides, we use a mix of convolutional layers and max pooling to perform subsampling, to achieve good abstraction and prevent over-fitting. In our experiments, we compared our network with six similar network architectures and two traditional methods using handcrafted features. Extensive experimental results evaluated on 40,000 speech audio clips have shown the effectiveness of the proposed convolutional network.
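A skeletal PyTorch rendering of the kind of network described, with a fixed high-pass first layer to suppress audio content followed by convolution and max pooling; the filter taps and layer sizes are illustrative assumptions, not the paper's architecture:

import torch
import torch.nn as nn

# Skeleton steganalysis CNN for raw audio (layer sizes are illustrative).
# The first convolution is a fixed high-pass filter that suppresses audio
# content so later layers focus on the ±1 LSB embedding residuals.
class StegoCNN(nn.Module):
    def __init__(self):
        super().__init__()
        hpf = nn.Conv1d(1, 1, kernel_size=5, padding=2, bias=False)
        hpf.weight.data = torch.tensor([[[-1., 2., -2., 2., -1.]]])
        hpf.weight.requires_grad = False        # content-suppressing, not learned
        self.features = nn.Sequential(
            hpf,
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, 2)      # cover vs. stego

    def forward(self, x):                       # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

logits = StegoCNN()(torch.randn(4, 1, 16000))
print(logits.shape)                             # torch.Size([4, 2])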
Hardware Trojans (HTs) have been one of the well-known hardware security issues in the research community over the last decade. HT research mainly focuses on HT detection, HT defense, and the design of novel HTs. HTs are inserted by an adversary for leaking secret data, denial of service attacks, etc. Trojan benchmark circuits for processors, cryptography, and communication protocols from Trust-Hub are widely used in HT research. Power-analysis-based side-channel attacks, and the design of countermeasures against them, are likewise a well-established research area. Trust-Hub provides Advanced Encryption Standard (AES) HT benchmarks that promote power-based side-channel attacks for research. In this work, we analyze the strength of these AES HT benchmarks in the presence of well-known side-channel attack countermeasures. Masking, random delay insertion, and tweaking the operating frequency of the clock used in sensitive operations are applied to the AES benchmarks. Simulation and power profiling studies confirm that the side-channel-promoting HT benchmarks are resilient against these selected countermeasures: even in their presence, an adversary can obtain the sensitive data by triggering the HT.
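A toy illustration of the first countermeasure, Boolean masking (illustrative only; a real masked AES splits every intermediate value, including the S-box computation, into shares):

import secrets

# Toy first-order Boolean masking of an XOR-with-key step. The sensitive
# byte is split into shares whose XOR equals the real value, so the power
# footprint of any single share is independent of the secret.
def masked_xor_key(plaintext_byte, key_byte):
    m = secrets.randbits(8)                # fresh random mask per execution
    masked_state = plaintext_byte ^ m      # share 1: data XOR mask
    masked_out = masked_state ^ key_byte   # key mixed into the masked share
    return masked_out ^ m                  # unmask only at the very end

assert masked_xor_key(0x3A, 0x5C) == (0x3A ^ 0x5C)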