Biblio
Support vector machines (SVMs) have been widely used for classification in machine learning and data mining. However, SVMs face a huge challenge in large-scale classification tasks. Recent progress has enabled additive-kernel versions of SVM to solve such large-scale problems nearly as fast as a linear classifier. This paper proposes a new accelerated mini-batch stochastic gradient descent algorithm for SVM classification with an additive kernel (AK-ASGD). On the one hand, the gradient is approximated by the sum of a scalar polynomial function for each feature dimension; on the other hand, Nesterov's acceleration strategy is used. Experimental results on benchmark large-scale classification data sets show that our proposed algorithm achieves higher testing accuracies and has a faster convergence rate.
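The accelerated update at the heart of such a method can be sketched as mini-batch SGD with Nesterov momentum on a linear hinge loss. This is a toy analogue only: the paper's additive-kernel feature approximation (per-dimension polynomial gradients) is omitted, and all parameter values below are illustrative assumptions.

```python
import random

def ak_asgd_sketch(X, y, lr=0.1, momentum=0.9, batch=2, epochs=50, lam=0.01):
    """Mini-batch SGD with Nesterov momentum on a linear hinge loss.

    Toy analogue of an accelerated SVM solver; the additive-kernel
    gradient approximation is not modelled here.
    """
    d = len(X[0])
    w = [0.0] * d
    v = [0.0] * d  # velocity
    idx = list(range(len(X)))
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(idx)
        for start in range(0, len(idx), batch):
            B = idx[start:start + batch]
            # Nesterov look-ahead point
            wl = [w[j] + momentum * v[j] for j in range(d)]
            # gradient of L2 regularizer plus hinge loss at the look-ahead
            g = [lam * wl[j] for j in range(d)]
            for i in B:
                if y[i] * sum(wl[j] * X[i][j] for j in range(d)) < 1:
                    for j in range(d):
                        g[j] -= y[i] * X[i][j] / len(B)
            for j in range(d):
                v[j] = momentum * v[j] - lr * g[j]
                w[j] += v[j]
    return w
```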
Despite significant recent advances, the effectiveness of symbolic execution is limited when used to test complex, real-world software. One of the main scalability challenges is related to constraint solving: large applications and long exploration paths lead to complex constraints, often involving big arrays indexed by symbolic expressions. In this paper, we propose a set of semantics-preserving transformations for array operations that take advantage of contextual information collected during symbolic execution. Our transformations lead to simpler encodings and hence better performance in constraint solving. The results we obtain are encouraging: we show, through an extensive experimental analysis, that our transformations help to significantly improve the performance of symbolic execution in the presence of arrays. We also show that our transformations enable the analysis of new code, which would be otherwise out of reach for symbolic execution.
Community detection is an advanced graph operation that is used to reveal tightly-knit groups of vertices (a.k.a. communities) in real-world networks. Given the intractability of the problem, efficient heuristics are used in practice. Yet, even the best of these state-of-the-art heuristics can become computationally demanding over large inputs and can generate workloads that exhibit inherent irregularity in data movement on manycore platforms. In this paper, we posit that effective acceleration of the graph community detection operation can be achieved by reducing the cost of data movement through a combined innovation at both software and hardware levels. More specifically, we first propose an efficient software-level parallelization of community detection that uses approximate updates to cleverly exploit a diminishing returns property of the algorithm. Second, as a way to augment this innovation at the software layer, we design an efficient Wireless Network on Chip (WiNoC) architecture that is suited to handle the irregular on-chip data movements exhibited by the community detection algorithm under both unicast- and broadcast-heavy cache coherence protocols. Experimental results show that our resulting WiNoC-enabled manycore platform achieves on average 52% savings in execution time, without compromising on the quality of the outputs, when compared to a traditional manycore platform designed with a wireline mesh NoC and running community detection without employing approximate updates.
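The "approximate updates" idea of revisiting only vertices whose neighbourhood recently changed can be illustrated with a worklist-based label propagation. This is a loose toy analogue under stated assumptions: the paper parallelizes a modularity-based (Louvain-style) heuristic on custom hardware, not plain label propagation, and the graph below is invented for illustration.

```python
from collections import deque

def label_prop_approx(adj):
    """Worklist-based label propagation: a vertex is re-examined only
    while its neighbourhood keeps changing, exploiting the
    diminishing-returns behaviour of community refinement.
    Toy analogue only; not the paper's Louvain-style heuristic.
    """
    labels = {v: v for v in adj}
    work = deque(adj)
    in_work = set(adj)
    while work:
        v = work.popleft()
        in_work.discard(v)
        if not adj[v]:
            continue
        # adopt the most frequent label among neighbours (ties -> smallest)
        freq = {}
        for u in adj[v]:
            freq[labels[u]] = freq.get(labels[u], 0) + 1
        best = max(sorted(freq), key=lambda l: freq[l])
        if best != labels[v]:
            labels[v] = best
            for u in adj[v]:  # only neighbours need re-checking
                if u not in in_work:
                    work.append(u)
                    in_work.add(u)
    return labels
```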
There are many everyday situations in which users need to enter their user identification (user ID), such as logging in to computer systems and entering secure offices. In such situations, contactless passive IC cards are convenient because users can input their user ID simply by passing the card over a reader. However, these cards cannot be used for successive interactions. To address this issue, we propose AccelTag, a contactless IC card equipped with an acceleration sensor and a liquid crystal display (LCD). AccelTag utilizes high-function RFID technology so that the acceleration sensor and the LCD can also be driven by a wireless power supply. With its built-in acceleration sensor, AccelTag can acquire its direction and movement when it is waved over the reader. We demonstrate several applications of AccelTag, such as displaying different types of information on the card depending on the user's requirements.
Damgard et al. proposed a new primitive called access control encryption (ACE) [6], which not only protects the privacy of the message but also controls the sender's ability to send it. We give a new construction based on the Learning with Errors (LWE) assumption [12], which addresses one of the two open problems in [6]. Although there are many public key encryption schemes based on LWE that support homomorphic operations, we find that not every such scheme can be used to build ACE. To preserve the security and correctness of ACE, the random constant chosen by the sanitizer must satisfy a stricter condition. We also give a security proof for LWE-based ACE that differs from the DDH-based proof. We show that although the modulus of the LWE instance must be super-polynomial, the ACE scheme is still as secure as general lattice-based public key encryption schemes [5].
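For readers unfamiliar with LWE-based encryption, the basic mechanics can be shown with a toy Regev-style bit encryption. This sketch is illustrative only: the parameters are far too small to be secure, and the paper's ACE construction adds a sanitizer and stricter constant choices on top of such a base scheme.

```python
import random

def lwe_toy(n=16, q=7681, noise=2, seed=0):
    """Toy Regev-style LWE encryption of a single bit.

    Insecure demo parameters; real schemes need much larger n, q and a
    carefully chosen error distribution.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]  # secret key

    def enc(bit):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randrange(-noise, noise + 1)  # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
        return a, b

    def dec(ct):
        a, b = ct
        d = (b - sum(ai * si for ai, si in zip(a, s))) % q
        # closer to q/2 -> bit 1; closer to 0 or q -> bit 0
        return 1 if q // 4 < d < 3 * q // 4 else 0

    return enc, dec
```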
This paper outlines the IoT Databox model as a means of making the Internet of Things (IoT) accountable to individuals. Accountability is key to building consumer trust and is mandated in data protection legislation. We briefly outline the `external' data subject accountability requirement specified in actual legislation in Europe and proposed legislation in the US, and how meeting this requirement turns on surfacing the invisible actions and interactions of connected devices and the social arrangements in which they are embedded. The IoT Databox model is proposed as an in-principle means of enabling accountability and providing individuals with the mechanisms needed to build trust in the IoT.
Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.
Membership revocation is essential for cryptographic applications, from traditional PKIs to group signatures and anonymous credentials. Of the various solutions for the revocation problem that have been explored, dynamic accumulators are one of the most promising. We propose Braavos, a new, RSA-based, dynamic accumulator. It has optimal communication complexity and, when combined with efficient zero-knowledge proofs, provides an ideal solution for anonymous revocation. For the construction of Braavos we use a modular approach: we show how to build an accumulator with better functionality and security from accumulators with fewer features and weaker security guarantees. We then describe an anonymous revocation component (ARC) that can be instantiated using any dynamic accumulator. ARC can be added to any anonymous system, such as anonymous credentials or group signatures, in order to equip it with a revocation functionality. Finally, we implement ARC with Braavos and plug it into Idemix, the leading implementation of anonymous credentials. This work resolves, for the first time, the problem of practical revocation for anonymous credential systems.
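The RSA-accumulator idea underlying such constructions can be demonstrated in a few lines: elements (represented as primes) are folded into the accumulator by exponentiation, and a membership witness is the accumulator computed over every other element. This is a toy sketch with an insecure, hand-picked modulus; Braavos's communication-optimal dynamic updates and zero-knowledge proofs are not shown.

```python
def rsa_accumulator_demo():
    """Toy RSA accumulator: add primes, produce membership witnesses.

    Illustrative only: the modulus is tiny and not securely generated.
    """
    N = 3989 * 4001  # toy RSA modulus (product of two small primes); insecure
    g = 65537 % N    # public base
    elements = []    # accumulated elements (represented as primes)

    def add(x):
        elements.append(x)

    def accumulate():
        acc = g
        for x in elements:
            acc = pow(acc, x, N)
        return acc

    def witness(x):
        # exponentiate by every accumulated element except x
        wit = g
        for y in elements:
            if y != x:
                wit = pow(wit, y, N)
        return wit

    def verify(x, wit, acc):
        return pow(wit, x, N) == acc

    return add, accumulate, witness, verify
```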
Interconnect opens are known to be one of the predominant defects in nanoscale technologies. Automatic test pattern generation for open faults is challenging, because of their rather unstable behavior and the numerous electrical parameters which need to be considered. Thus, most approaches try to avoid accurate modeling of all constraints like the influence of the aggressors on the open net and use simplified fault models in order to detect as many faults as possible or make assumptions which decrease both complexity and accuracy. Yet, this leads to the problem that not only generated tests may be invalidated but also the localization of a specific fault may fail - in case such a model is used as basis for diagnosis. Furthermore, most of the models do not consider the problem of oscillating behavior, caused by feedback introduced by coupling capacitances, which occurs in almost all designs. In [1], the Robust Enhanced Aggressor Victim Model (REAV) and in [2] an extension to address the problem of oscillating behavior were introduced. The resulting model does not only consider the influence of all aggressors accurately but also guarantees robustness against oscillating behavior as well as process variations affecting the thresholds of gates driven by an open interconnect. In this work we present the first diagnostic classification algorithm for this model. This algorithm considers all constraints enforced by the REAV model accurately - and hence handles unknown values as well as oscillating behavior. In addition, it allows to distinguish faults at the same interconnect and thus reducing the area that has to be considered for physical failure analysis. Experimental results show the high efficiency of the new method handling circuits with up to 500,000 non-equivalent faults and considerably increasing the diagnostic resolution.
Nowadays, wireless networks are fast becoming more secure than their wired counterparts. Recent technological advances in wireless networking, IC fabrication and sensor technology have led to the emergence of millimetre-scale devices that collectively form a Wireless Sensor Network (WSN) and are radically changing the way in which we sense, process and transport signals of interest. WSNs are increasingly becoming viable solutions to many challenging problems and will successively be deployed in many areas in the future, such as environmental monitoring, business, and military applications. However, deploying new technology without security in mind has often proved to be unreasonably dangerous. This also applies to WSNs, especially those used in applications that monitor sensitive information (e.g., health care applications). There have been significant contributions to overcoming many weaknesses in sensor networks, such as coverage problems, limited power and making the best use of limited network bandwidth; however, work in sensor network security is still in its infancy. Security in WSNs presents several well-known challenges stemming from all kinds of resource constraints of individual sensors, and the problem of securing these networks is emerging more and more as a hot topic. Symmetric key cryptography is commonly seen as infeasible and public key cryptography has its own key distribution problem. In contrast to this prejudice, this paper presents a new symmetric encryption algorithm that extends the authors' previous work, i.e., UES versions II and III. Roy et al. recently developed several efficient encryption methods: UES version-I, Modified UES-I, UES version-II and UES version-III. The proposed algorithm is named Ultra Encryption Standard version-IV. It is a symmetric key cryptosystem that includes multiple encryption, a bit-wise reshuffling method and a bit-wise columnar transposition method.
In the present work the authors perform the encryption process at the bit level to achieve greater encryption strength. The proposed method, UES-IV, can be used to encrypt short messages, passwords or any confidential key.
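One building block named above, bit-wise columnar transposition, can be sketched as follows: the message is expanded to a bit string, written row by row into a grid, and read out column by column. This is a toy sketch of that single step under assumed conventions (8 columns, most-significant-bit-first packing); UES-IV combines it with multiple encryption rounds and a bit-wise reshuffling step not shown here.

```python
def bytes_to_bits(data):
    """Expand bytes to a most-significant-bit-first bit list."""
    return [(b >> (7 - i)) & 1 for b in data for i in range(8)]

def bits_to_bytes(bits):
    """Pack a bit list (length a multiple of 8) back into bytes."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

def transpose_bits(bits, cols):
    """Write bits row-by-row into a cols-wide grid, read column-by-column.
    Requires len(bits) to be a multiple of cols."""
    rows = len(bits) // cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def untranspose_bits(bits, cols):
    """Invert transpose_bits for the same column count."""
    rows = len(bits) // cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

A round trip (`untranspose_bits` after `transpose_bits`) recovers the plaintext, which is what makes the step usable inside a cipher.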
We consider the problem of privacy-preserving data aggregation in a star network topology, i.e., several untrusting participants connected to a single aggregator. We require that the participants do not discover each other's data, and the service provider remains oblivious to each participant's individual contribution. Furthermore, the final result is to be published in a differentially private manner, i.e., the result should not reveal the contribution of any single participant to a (possibly external) adversary who knows the contributions of all other participants. In other words, we require a secure multiparty computation protocol that also incorporates a differentially private mechanism. Previous solutions have resorted to caveats such as postulating a trusted dealer to distribute keys to the participants, or introducing additional entities to withhold the decryption key from the aggregator, or relaxing the star topology by allowing pairwise communication amongst the participants. In this paper, we show how to obtain a noisy (differentially private) aggregation result using Shamir secret sharing and additively homomorphic encryption without these mitigating assumptions. More importantly, while we assume semi-honest participants, we allow the aggregator to be stronger than semi-honest, specifically in the sense that he can try to reduce the noise in the differentially private result. To respect the differential privacy requirement, collusions of mutually untrusting entities need to be analyzed differently from traditional secure multiparty computation: It is not sufficient that such collusions do not reveal the data of honest participants; we must also ensure that the colluding entities cannot undermine differential privacy by reducing the amount of noise in the final result. Our protocols avoid this by requiring that no entity – neither the aggregator nor any participant – knows how much noise a participant contributes to the final result. 
We also ensure that if a cheating aggregator tries to influence the noise term in the differentially private output, he can be detected with overwhelming probability.
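The linearity that makes Shamir secret sharing suit this aggregation setting can be shown concretely: because shares are evaluations of polynomials, adding the participants' shares pointwise yields shares of the sum, so the aggregator can reconstruct only the aggregate. This is a minimal sketch of that property alone, with an assumed toy field modulus; the paper's noise handling, homomorphic encryption layer and cheating-aggregator detection are not modelled.

```python
import random

P = 2_147_483_647  # toy prime field modulus

def share(secret, k, n, rng):
    """Shamir-share `secret` with threshold k among parties x = 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over k shares."""
    total = 0
    for x_i, y_i in shares:
        num = den = 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total
```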
Smart Card has complications with validation and transmission process. Therefore, by using peeping attack, the secret code was stolen and secret filming while entering Personal Identification Number at the ATM machine. We intend to develop an authentication system to banks that protects the asset of user's. The data of a user is to be ensured that secure and isolated from the data leakage and other attacks Therefore, we propose a system, where ATM machine will have a QR code in which the information's are encrypted corresponding to the ATM machine and a mobile application in the customer's mobile which will decrypt the encoded QR information and sends the information to the server and user's details are displayed in the ATM machine and transaction can be done. Now, the user securely enters information to transfer money without risk of peeping attack in Automated Teller Machine by just scanning the QR code at the ATM by mobile application. Here, both the encryption and decryption technique are carried out by using Triple DES Algorithm (Data Encryption Standard).
Mimicking the collaborative behavior of biological swarms, such as bird flocks and ant colonies, swarm intelligence algorithms provide efficient solutions for various optimization problems. Meanwhile, spiking neural networks, a computational model of the human brain, have been showing great promise in recognition, inference, and learning, due to the recent emergence of neuromorphic hardware for highly efficient and low-power computing. Bridging these two distinct research fields, we propose a novel computing paradigm that implements swarm intelligence with a population of coupled spiking neural oscillators in the basic leaky integrate-and-fire (LIF) model. Our model behaves as a meta-heuristic search conducted by multiple collaborative agents. In this design, the oscillating neurons serve as agents in the swarm, search for solutions in frequency coding and communicate with each other through spikes. The firing rate of each agent adapts to other agents with better solutions, and the optimal solution is rendered when swarm synchronization is reached. We apply the proposed method to parameter optimization on several test objective functions and demonstrate its effectiveness and efficiency. Our new computing paradigm expands the computational power of coupled spiking neurons to the field of optimization and brings opportunities for the connection between individual intelligence and swarm intelligence.
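The search dynamics, with firing rates standing in for solution quality and agents drifting toward faster-firing peers, can be caricatured in a few lines. This is a heavily simplified analogue under stated assumptions: the actual LIF spiking dynamics, spike-based communication and synchronization criterion are abstracted into a rate-coded pull toward the current best agent, and the objective and step sizes below are invented for illustration.

```python
import random

def rate_coded_swarm(f, lo, hi, n_agents=20, steps=200, seed=0):
    """Toy rate-coded swarm search: each agent carries a candidate
    solution, its 'firing rate' encodes solution quality, and agents
    drift toward the fastest-firing agent with small jitter until the
    swarm settles on one solution. Not the paper's LIF implementation.
    """
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_agents)]
    for _ in range(steps):
        rates = [1.0 / (1.0 + f(x)) for x in xs]  # better solution -> faster firing
        best = xs[max(range(n_agents), key=lambda i: rates[i])]
        for i in range(n_agents):
            # pull toward the fastest-firing agent, with exploratory jitter
            xs[i] += 0.1 * (best - xs[i]) + rng.gauss(0, 0.01)
    return sum(xs) / n_agents
```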
Active authentication is the problem of continuously verifying the identity of a person based on behavioral aspects of their interaction with a computing device. In this paper, we collect and analyze behavioral biometrics data from 200 subjects, each using their personal Android mobile device for a period of at least 30 days. This data set is novel in the context of active authentication due to its size, duration, number of modalities, and absence of restrictions on tracked activity. The geographical colocation of the subjects in the study is representative of a large closed-world environment such as an organization where the unauthorized user of a device is likely to be an insider threat: coming from within the organization. We consider four biometric modalities: 1) text entered via soft keyboard, 2) applications used, 3) websites visited, and 4) physical location of the device as determined from GPS (when outdoors) or WiFi (when indoors). We implement and test a classifier for each modality and organize the classifiers as a parallel binary decision fusion architecture. We are able to characterize the performance of the system with respect to intruder detection time and to quantify the contribution of each modality to the overall performance.
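A standard way to realize a parallel binary decision fusion architecture is the Chair-Varshney rule: each modality's binary decision is weighted by log-likelihood ratios derived from its detection and false-alarm probabilities. The sketch below shows that fusion step only, with invented probabilities; the paper's per-modality classifiers and timing analysis are not reproduced.

```python
import math

def fuse_decisions(decisions, pd, pf):
    """Chair-Varshney-style fusion of parallel binary decisions.

    decision u_i = 1 flags "intruder"; pd[i]/pf[i] are modality i's
    detection and false-alarm probabilities (assumed known).
    """
    score = 0.0
    for u, d, f in zip(decisions, pd, pf):
        if u == 1:
            score += math.log(d / f)
        else:
            score += math.log((1 - d) / (1 - f))
    return 1 if score > 0 else 0
```

Reliable modalities (high pd, low pf) thus dominate the fused decision, which matches the intuition of weighting each modality by its contribution to overall performance.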
Labeled datasets are always limited, and oftentimes the quantity of labeled data is a bottleneck for data analytics. This especially affects supervised machine learning methods, which require labeled data for models to learn from. Active learning algorithms have been proposed to help achieve good analytic models with limited labeling effort, by determining which additional instance labels will be most beneficial for learning for a given model. Active learning fits well with interactive analytics, as it proceeds in a cycle in which the unlabeled data is automatically explored. However, in active learning users have no control over which instances are labeled, and for text data the annotation interface usually shows the document only. Both of these constraints seem to affect the performance of an active learning model. We hypothesize that visualization techniques, particularly interactive ones, will help to address these constraints. In this paper, we present a pilot study of visualization in active learning for text classification, with an interactive labeling interface. We compare the results of three experiments. Early results indicate that visualization improves high-performance machine learning model building with an active learning algorithm.
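The selection step at the core of many such active learning loops is uncertainty sampling: ask for labels on the instances the current model is least sure about. A minimal sketch, assuming a binary classifier that exposes `P(class=1)` per instance (the paper's study layers an interactive visual interface on top of a selector like this):

```python
def uncertainty_sampling(probs, k=1):
    """Pick the k unlabeled instances whose predicted probability is
    closest to 0.5 (binary case) -- the most uncertain ones.

    `probs` maps instance ids to an assumed model's P(class=1).
    """
    return sorted(probs, key=lambda i: abs(probs[i] - 0.5))[:k]
```

In a full loop, the returned instances would be shown to the user for labeling, the model retrained, and the cycle repeated.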
With the development of the smart grid, information and energy are deeply integrated. For remote monitoring and cluster management, the SCADA system of a wind farm must be connected to the Internet. However, communication security and operational risk pose a challenge to the wind farm's data network. To address this problem, an active security defense strategy combining a whitelist with security situation assessment is proposed. First, the whitelist is designed by analyzing legitimate Modbus packets in the communication between SCADA servers and PLCs. Then, knowledge automation is applied to establish the Decision Requirements Diagram (DRD) for wind farm security. D-S evidence theory is adopted to assess the operational situation of the wind farm, and together with the whitelist it provides the security decision for the wind turbines. This strategy helps to eliminate wind farm owners' security concerns about data networking and improves the integrity of the cyber security defense for wind farms.
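The D-S (Dempster-Shafer) fusion step can be made concrete with Dempster's rule of combination for two mass functions over a frame of discernment. The sketch below shows only that rule, with an invented two-hypothesis frame ("safe"/"risk"); the DRD structure and the Modbus whitelist are not modelled.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Focal elements are frozensets over the frame of discernment;
    mass on conflicting (disjoint) pairs is discarded and the rest
    renormalized.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```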
Tamper detection circuits provide the first and most important defensive wall in protecting electronic modules containing security data. A widely used procedure is to cover the entire module with a foil containing a fine conductive mesh, which detects intrusion attempts. Detection circuits are further classified as passive or active. Passive circuits have the advantage of low power consumption; however, they are unable to detect small variations in the conductive mesh parameters. Since modern attack tools hold the upper hand over the passive method, the most efficient way to protect security modules is to use active circuits. Active tamper detection circuits typically probe the conductive mesh with short pulses, analyzing its response in terms of delay and shape. The method proposed in this paper generates short pulses at one end of the mesh and analyzes the response at the other end. Apart from measuring pulse delay, the analysis includes a frequency-domain characterization of the system, determining whether there has been an intrusion by comparing the response to a reference (un-tampered) spectrum. The novelty of this design is the combined analysis, in time and frequency domains, of the small variations in mesh characteristic parameters.
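The frequency-domain check can be sketched as: take the magnitude spectrum of the sampled pulse response, compare it with a stored un-tampered reference spectrum, and flag an intrusion when the deviation exceeds a threshold. This is a minimal sketch under assumed conventions (direct DFT, Euclidean deviation, hand-picked threshold); the paper's time-domain pulse-delay measurement is not shown.

```python
import cmath

def spectrum(signal):
    """Magnitude spectrum via a direct DFT (fine for short probe responses)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))) for k in range(n)]

def tampered(response, reference, threshold):
    """Flag an intrusion when the response spectrum deviates from the
    stored reference spectrum by more than `threshold` (Euclidean norm).
    """
    r, ref = spectrum(response), spectrum(reference)
    deviation = sum((a - b) ** 2 for a, b in zip(r, ref)) ** 0.5
    return deviation > threshold
```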
In this paper, we propose a novel adaptive control architecture for addressing security and safety in cyber-physical systems subject to exogenous disturbances. Specifically, we develop an adaptive controller for time-invariant, state-dependent adversarial sensor and actuator attacks in the face of stochastic exogenous disturbances. We show that the proposed controller guarantees uniform ultimate boundedness of the closed-loop dynamical system in a mean-square sense. We further discuss the practicality of the proposed approach and provide a numerical example involving the lateral directional dynamics of an aircraft to illustrate the efficacy of the proposed adaptive control architecture.
Differential privacy is an approach that preserves patient privacy while permitting researchers access to medical data. This paper presents mechanisms proposed to satisfy differential privacy while answering a given workload of range queries. Representing the input data as a vector of counts, these methods partition the vector according to relationships between the data and the ranges of the given queries. After partitioning the vector into buckets, the count of each bucket is estimated privately and split among the bucket's positions to answer the given query set. The performance of the proposed method was evaluated using different workloads over several attributes. The results show that partitioning the vector based on the data can produce more accurate answers, while partitioning the vector based on the given workload improves privacy. This paper's two main contributions are: (1) improving earlier work on partitioning mechanisms by building a greedy algorithm that partitions the counts vector efficiently, and (2) providing an adaptive algorithm that considers the sensitivity of the given queries before producing results.
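The bucket-then-noise pattern can be sketched as follows: greedily merge adjacent positions with similar counts into buckets, add Laplace noise to each bucket total, and split the noisy total uniformly over the bucket's positions. This is a toy analogue under stated assumptions (a simple adjacent-similarity merge rule and unit sensitivity); the paper's greedy partitioning and sensitivity handling are more involved.

```python
import math
import random

def dp_bucket_counts(counts, epsilon, max_gap=2, seed=0):
    """Greedy partitioning sketch: merge adjacent positions whose counts
    are within `max_gap` of the bucket's first count, add Laplace(1/epsilon)
    noise to each bucket total, and split it uniformly over the bucket.
    """
    rng = random.Random(seed)
    buckets = [[0]]
    for i in range(1, len(counts)):
        if abs(counts[i] - counts[buckets[-1][0]]) <= max_gap:
            buckets[-1].append(i)
        else:
            buckets.append([i])
    out = [0.0] * len(counts)
    for b in buckets:
        total = sum(counts[i] for i in b)
        # Laplace noise via inverse-CDF sampling
        u = rng.random() - 0.5
        noisy = total + (-1 if u < 0 else 1) * (1 / epsilon) * -math.log(1 - 2 * abs(u))
        for i in b:
            out[i] = noisy / len(b)
    return out
```

Range queries are then answered by summing the returned per-position estimates; positions inside one bucket share the same noisy value, which is what trades per-position accuracy for lower overall noise.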