Bibliography
The rise of sensor-equipped smartphones has enabled a variety of classification-based applications that provide personalized services based on user data extracted from sensor readings. However, malicious applications can aggressively collect sensitive information from this inherent user data without permission. Furthermore, they can mine additional sensitive information from user data during the classification process itself. These threats raise serious privacy concerns. In this paper, we introduce two new privacy notions: inherent-data privacy and latent-data privacy. We propose a framework that enables a data-obfuscation mechanism to be developed easily; it preserves latent-data privacy while guaranteeing satisfactory service quality. The proposed framework preserves privacy against powerful adversaries who know both users' access patterns and the data-obfuscation mechanism. We validate our framework on a real classification-oriented dataset. The experimental results confirm that our framework is superior to a basic obfuscation mechanism.
After being widely studied in theory, physical-layer security schemes are getting closer to entering the consumer market. Still, a thorough practical analysis of their resilience against attacks is missing. In this work, we use software-defined radios to implement such a physical-layer security scheme, namely orthogonal blinding. To this end, we use orthogonal frequency-division multiplexing (OFDM) as the physical layer, similar to WiFi. In orthogonal blinding, a multi-antenna transmitter overlays the data it transmits with noise in such a way that every node except the intended receiver is disturbed by the noise. Still, our known-plaintext attack can extract the data signal at an eavesdropper by means of an adaptive filter trained using a few known data symbols. Our demonstrator illustrates the iterative training process at the symbol level, thus showing the practicability of the attack.
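The core of the attack is a standard adaptive-filtering loop. The following toy sketch shows how an eavesdropper might train a least-mean-squares (LMS) filter on a few known symbols; the single-channel model, filter length, and step size are illustrative assumptions, not details from the paper.

```python
# Illustrative LMS adaptive filter, trained from known data symbols as an
# eavesdropper might do to suppress an artificial-noise overlay.
# All parameters here are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_taps, mu = 64, 4, 0.05            # known symbols, filter taps, LMS step size
symbols = rng.choice([-1.0, 1.0], size=512)  # simplified BPSK data stream
noise = 0.5 * rng.standard_normal(512)       # stand-in for the blinding noise
received = symbols + noise                   # toy eavesdropper observation

w = np.zeros(n_taps)                         # adaptive filter weights
for i in range(n_taps, n_train):             # train on the known-plaintext prefix
    x = received[i - n_taps:i][::-1]         # most recent samples first
    err = symbols[i] - w @ x                 # error vs. the known symbol
    w += mu * err * x                        # LMS weight update

# Apply the trained filter to the remaining samples to estimate unknown symbols.
est = np.array([w @ received[i - n_taps:i][::-1] for i in range(n_train, 512)])
ser = np.mean(np.sign(est) != symbols[n_train:])
print(f"symbol error rate after training: {ser:.3f}")
```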
In Pixar's Finding Dory, we are introduced to a new character: Hank the Octopus. This is a very different character than Pixar has been asked to animate before. Our directors demanded both precise control and graceful, clean silhouettes. The reference artwork we were given showed complex curves between arms and body without any disjointed shapes or breaks in form. Video of octopuses in motion reveals an infinitely malleable creature capable of an enormous shape language. This art direction required a small group of TDs to create a control scheme that was sensible and flexible, with a new level of control, in order for animators to bring Hank to life. We had to think deeply about everything from the tips of the fingers all the way through to how the tentacles connect to the mouth corners and eye sockets. Each of these issues raised concerns around design, deformation, and finally how the end user can manipulate such complexity effectively.
Many applications of mobile computing require the computation of the dot product of two vectors. For example, the dot product of an individual's genome data and a health center's gene biomarkers can help detect diseases in m-Health, and that of two persons' interest vectors can facilitate friend discovery in mobile social networks. Nevertheless, exposing the inputs of a dot-product computation discloses sensitive information about the two participants, leading to severe privacy violations. In this paper, we tackle the problem of privacy-preserving dot-product computation for mobile computing applications in which secure channels are hard to establish and computational efficiency is highly desirable. We first propose two basic schemes and then present corresponding advanced versions that improve efficiency and strengthen privacy protection. Furthermore, we theoretically prove that our proposed schemes simultaneously achieve privacy preservation, non-repudiation, and accountability. Our numerical results verify the performance of the proposed schemes in terms of communication and computational overheads.
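To make the privacy problem concrete, here is a toy sketch contrasting a plain dot product with a simple one-time additive mask. The masking trick only illustrates how one input can be hidden; it is not one of the paper's schemes, and the vectors are made up.

```python
# Toy illustration: a plain dot product reveals both inputs, while a simple
# one-time additive mask (NOT the paper's scheme) hides Alice's vector from
# Bob at the cost of a trusted setup phase.
import numpy as np

rng = np.random.default_rng(1)
alice = rng.integers(0, 2, size=8)   # e.g., genome biomarker indicator vector
bob = rng.integers(0, 2, size=8)     # e.g., health-center biomarker weights

# Naive computation: whoever evaluates this sees both vectors in the clear.
plain = int(alice @ bob)

# Masked variant: a setup party hands Alice a random mask r and gives Bob
# the correction term r @ bob; Alice only ever reveals alice + r.
r = rng.integers(0, 100, size=8)
masked_alice = alice + r             # looks random to Bob
correction = int(r @ bob)            # computed during setup, not by Alice
unmasked = int(masked_alice @ bob) - correction

assert unmasked == plain             # (a + r)·b - r·b = a·b
print("dot product:", plain)
```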
Standardization and harmonization efforts have reached a consensus towards using a special-purpose Vehicular Public-Key Infrastructure (VPKI) in upcoming Vehicular Communication (VC) systems. However, there are still several technical challenges with no conclusive answers; one important yet open challenge is the acquisition of short-term credentials (pseudonyms): how should each vehicle interact with the VPKI, e.g., how frequently and for how long? Should each vehicle itself determine the pseudonym lifetime? Answering these questions is far from trivial. Each choice can affect both user privacy and system performance and possibly, as a result, security. In this paper, we make a novel systematic effort to address this multifaceted question. We craft three generally applicable policies and experimentally evaluate the VPKI system performance, leveraging two large-scale mobility datasets. We consider the pseudonym acquisition policies that are most promising in terms of efficiency; we find that within this class of policies, the most promising policy in terms of privacy protection can be supported with moderate overhead. Moreover, in all cases, this work is the first to provide tangible evidence that the state-of-the-art VPKI can serve sizable areas or domains with modest computing resources.
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, considers methods to ensure secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods to break cryptographic systems. Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting proofs and the mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptography that have so far been successfully tackled with EC and discuss why those topics are suitable for EC. However, care must be taken, since there exist a number of problems that seem impossible to solve with EC, and one needs to realize the limitations of these heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what it means for the results. In doing so, we will place special emphasis on the perspective that cryptography presents a source of benchmark problems for the EC community. To conclude, we will present a number of topics we consider strong research choices that can have real-world impact. In that part, we give special attention to cryptographic problems where the cryptographic community has successfully applied EC, but which have remained outside the focus of the EC community. This tutorial will also present some live demos of EC in action on cryptographic problems. We will present several problems, ways of encoding solutions, and the impact of algorithm choice, and finally we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
Multi-objective evolutionary algorithms (MOEAs) based on decomposition are aggregation-based algorithms that transform a multi-objective optimization problem (MOP) into several single-objective subproblems. Being effective, efficient, and easy to implement, Particle Swarm Optimization (PSO) has become one of the most popular single-objective optimizers for continuous problems, and it has recently been successfully extended to the multi-objective domain. However, no investigation of PSO within a multi-objective decomposition framework exists in the context of combinatorial optimization. This is precisely the focus of this paper. More specifically, we study the incorporation of Geometric Particle Swarm Optimization (GPSO), a discrete generalization of PSO that has proven successful on a number of single-objective combinatorial problems, into a decomposition approach. We conduct experiments on many-objective 0/1 knapsack problems, i.e., problems with more than three objective functions, which are substantially harder than multi-objective problems with fewer objectives. The results indicate that the proposed multi-objective GPSO based on decomposition is able to outperform two versions of the well-known MOEA based on decomposition (MOEA/D) and the most recent version of the non-dominated sorting genetic algorithm (NSGA-III), which are state-of-the-art multi-objective evolutionary approaches based on decomposition.
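As a concrete illustration of the decomposition idea, the sketch below scalarizes one candidate's objective vector for several subproblems using the weighted Tchebycheff function, a common MOEA/D-style scalarization. The weight vectors, ideal point, and objective values are illustrative, not the paper's experimental setup.

```python
# Decomposition sketch: a multi-objective problem becomes scalar subproblems,
# one per weight vector, here via the weighted Tchebycheff scalarization.
import numpy as np

def tchebycheff(objectives, weights, ideal):
    """Scalarize one solution's objective vector for one subproblem."""
    return np.max(weights * np.abs(objectives - ideal))

# Three objectives (e.g., negated knapsack profits, to be minimized) and a
# few spread-out weight vectors, one per subproblem.
ideal = np.zeros(3)
weight_vectors = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0],
                           [1/3, 1/3, 1/3]])
candidate = np.array([0.4, 0.7, 0.2])   # made-up objective values

for i, w in enumerate(weight_vectors):
    print(f"subproblem {i}: scalar value {tchebycheff(candidate, w, ideal):.3f}")
```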
Real-world wireless networks usually have diverse connectivity characteristics. Although existing works have identified replication as the key to the successful design of routing protocols for these networks, the questions of when replication should be used, by how much, and how to distribute packet copies are still not satisfactorily answered. In this paper, we investigate the above questions and present the design of the Hybrid Routing Protocol (HRP). We make a key observation that delay correlations can significantly impact the performance improvements gained from packet replication. Thus, we propose a novel model to capture the correlations of inter-contact times among a group of nodes. HRP utilizes both direct delay feedback and the proposed model to estimate the replication gain, which is then fed into a novel regret-minimization algorithm that dynamically decides the amount of packet replication under unknown network conditions. We evaluate HRP through extensive simulations. We show that HRP achieves up to 3.5x delivery-ratio improvement and up to 50% delay reduction, with comparable or even lower overhead than state-of-the-art routing protocols.
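To convey the regret-minimization step in spirit, the toy learner below chooses among candidate replication factors with an EXP3-style multiplicative-weights update. The reward model is simulated and the constants are assumptions; HRP's actual algorithm and gain estimator are not reproduced here.

```python
# Toy bandit learner picking a replication factor under unknown conditions,
# in the spirit of a regret-minimization approach. Rewards are synthetic.
import numpy as np

rng = np.random.default_rng(2)
factors = [1, 2, 4, 8]                     # candidate numbers of packet copies
weights = np.ones(len(factors))
eta = 0.1                                  # learning rate (assumed)

for t in range(500):
    probs = weights / weights.sum()
    i = rng.choice(len(factors), p=probs)
    # Simulated reward in [0, 1]: delivery benefit grows with copies,
    # replication cost subtracts from it. Purely illustrative.
    reward = np.clip(rng.random() * min(1.0, factors[i] / 4)
                     - 0.05 * factors[i], 0.0, 1.0)
    weights[i] *= np.exp(eta * reward / probs[i])  # EXP3-style update
    weights /= weights.max()               # rescale for numerical stability

probs = weights / weights.sum()
print("learned preference over factors:", dict(zip(factors, probs.round(3))))
```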
Recently, chaotic public-key cryptography has attracted much attention from researchers due to the appealing properties of chaotic maps. Given the security and computational advantages of chaotic maps over other cryptosystems, in this paper a novel identity-based signcryption scheme is proposed using extended chaotic maps. The hardness of the chaos-based discrete logarithm (CDL) problem forms the foundation of the security of the proposed ECM-IBSC scheme.
This paper describes the application of permanent magnets in a permanent magnet generator (PMG) for renewable-energy power plants. The permanent magnets used are bonded hybrid magnets made from a mixture of 50 wt% barium ferrite magnetic powder and 50 wt% NdFeB magnetic powder, with 15 wt% adhesive polymer as a binder. The bonded hybrid magnets were prepared by hot pressing at a pressure of 2 tons and a temperature of 200°C for 15 minutes. The magnetic properties obtained were: remanence induction (Br) = 1.54 kG, coercivity (Hc) = 1.290 kOe, maximum energy product (BHmax) = 0.28 MGOe, and surface remanence induction = 1200 gauss.
The recent proliferation of human-carried mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to a crowd of participants equipped with various mobile devices. A fundamental issue in such systems is how to effectively incentivize worker participation. However, rather than being an isolated module, the incentive mechanism usually interacts with other components that may affect its performance, such as the data aggregation component that aggregates workers' data and the data perturbation component that protects workers' privacy. Therefore, departing from past literature, we capture this interactive effect and propose INCEPTION, a novel MCS system framework that integrates an incentive mechanism, a data aggregation mechanism, and a data perturbation mechanism. Specifically, its incentive mechanism selects workers who are more likely to provide reliable data and compensates their costs for both sensing and privacy leakage. Its data aggregation mechanism also incorporates workers' reliability to generate highly accurate aggregated results, and its data perturbation mechanism ensures satisfactory protection of workers' privacy and desirable accuracy of the final perturbed results. We validate the desirable properties of INCEPTION through theoretical analysis as well as extensive simulations.
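For intuition about the data-perturbation component, the sketch below shows a common perturbation primitive, the Laplace mechanism from differential privacy. The abstract does not state which mechanism INCEPTION uses, so this is a generic illustration; the sensitivity, epsilon, and readings are made up.

```python
# Generic data-perturbation primitive (Laplace mechanism), shown purely for
# illustration; not necessarily the mechanism used by INCEPTION.
import numpy as np

def perturb(values, sensitivity=1.0, epsilon=0.5,
            rng=np.random.default_rng(3)):
    """Add Laplace noise calibrated to sensitivity/epsilon to each reading."""
    return values + rng.laplace(scale=sensitivity / epsilon, size=len(values))

readings = np.array([21.0, 22.5, 19.8, 20.4])  # hypothetical sensor values
print(perturb(readings))                       # noisy values sent to the server
```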
Particle Swarm Optimization (PSO) has been shown to perform very well on a wide range of optimization problems. One drawback of PSO is that the base algorithm assumes continuous variables. In this paper, we present a version of PSO that is able to optimize over discrete variables. This new PSO algorithm, which we call Integer and Categorical PSO (ICPSO), incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions. In this paper, we describe our new algorithm and compare its performance against other discrete PSO algorithms. In our experiments, we demonstrate that our algorithm outperforms comparable methods on both discrete benchmark functions and NK landscapes, a mathematical framework that generates tunable fitness landscapes for evaluating EAs.
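The following simplified sketch conveys the distribution-as-particle idea: each variable carries a categorical distribution that is pulled toward the personal and global best solutions, then sampled to obtain a candidate. The mixing form and coefficients are illustrative assumptions, not ICPSO's exact update equations.

```python
# Simplified sketch of the ICPSO idea: particles hold, per variable, a
# probability distribution over discrete values; the PSO-style update nudges
# those distributions toward the personal and global bests.
import numpy as np

rng = np.random.default_rng(4)
n_vars, n_vals = 5, 3                              # 5 variables, 3 categories each

def one_hot(state):
    return np.eye(n_vals)[state]                   # mass 1 on each chosen value

particle = rng.dirichlet(np.ones(n_vals), size=n_vars)  # current distributions
pbest = rng.integers(0, n_vals, size=n_vars)       # best sampled solution so far
gbest = rng.integers(0, n_vals, size=n_vars)       # swarm-wide best solution

w, c1, c2 = 0.7, 0.2, 0.2                          # inertia and attraction weights
particle = (w * particle
            + c1 * one_hot(pbest)
            + c2 * one_hot(gbest))
particle /= particle.sum(axis=1, keepdims=True)    # renormalize to distributions

sample = np.array([rng.choice(n_vals, p=p) for p in particle])  # candidate
print("sampled solution:", sample)
```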
Particle swarm optimization (PSO) is considered a very efficient swarm intelligence technique for solving many problems, such as those related to constraint reasoning, in particular Constraint Satisfaction Problems (CSPs). In this paper, we introduce a new PSO method for solving Maximal Constraint Satisfaction Problems (Max-CSPs), which belong to the family of CSP extensions. Our approach is based on a combination of two concepts: double guidance by both the template concept and the min-conflict heuristic, and the triggered mutation proposed by Zhou and Tan. This new approach avoids premature stagnation in order to improve Max-CSP solution quality. We resort to highly parallel computing, which has shown high performance in several fields, using the GPU architecture as a parallel computing framework. The experimental results presented at the end show the efficiency of the introduced technique in the resolution of large Max-CSPs.
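Since the approach builds on the min-conflict heuristic, here is a minimal sketch of that heuristic on a toy graph-coloring CSP: repeatedly pick a conflicted variable and reassign it the value that violates the fewest constraints. The instance is illustrative; the paper's GPU-parallel Max-CSP machinery is not shown.

```python
# Min-conflicts heuristic on a toy CSP (graph coloring). With two colors the
# triangle is unsatisfiable, so the best achievable is one violated
# constraint: the Max-CSP objective of minimizing violations.
import random

random.seed(5)
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # constraints: endpoints must differ
n_vars, colors = 4, [0, 1]

def conflicts(assign, var, val):
    return sum(1 for a, b in edges
               if (a == var and assign[b] == val)
               or (b == var and assign[a] == val))

assign = [random.choice(colors) for _ in range(n_vars)]
for _ in range(100):
    conflicted = [v for v in range(n_vars) if conflicts(assign, v, assign[v]) > 0]
    if not conflicted:
        break
    var = random.choice(conflicted)
    assign[var] = min(colors, key=lambda c: conflicts(assign, var, c))  # min-conflict move

violated = sum(1 for a, b in edges if assign[a] == assign[b])
print(f"assignment {assign} violates {violated} constraint(s)")
```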
Centralized spectrum management is one of the key dynamic spectrum access (DSA) mechanisms proposed to govern spectrum sharing between government incumbent users (IUs) and commercial secondary users (SUs). In current centralized DSA designs, the operation data of both government IUs and commercial SUs needs to be shared with a central server. However, the operation data of government IUs is often classified information, and SU operation data may be a commercial secret. The current system design fails to satisfy the privacy requirements of both IUs and SUs, since the central server is not necessarily trustworthy enough to hold such sensitive operation data. To address this privacy issue, this paper presents a privacy-preserving centralized DSA system (P2-SAS), which realizes the complex spectrum allocation process of DSA through efficient secure multi-party computation. In P2-SAS, no IU or SU operation data is exposed to any snooping party, including the central server itself. We formally prove the correctness and privacy-preserving property of P2-SAS and evaluate its scalability and practicality in experiments based on real-world data. Experiment results show that P2-SAS can respond to an SU's spectrum request in 6.96 seconds with communication overhead of less than 4 MB.
One way of evaluating the reusability of a test collection is to determine whether removing the unique contributions of some system would alter the preference order between that system and others. Rank correlation measures such as Kendall's tau are often used for this purpose. Rank correlation measures are appropriate for ordinal measures in which only preference order is important, but many evaluation measures produce system scores in which both the preference order and the magnitude of the score difference are important. Such measures are referred to as interval. Pearson's rho offers one way to compute correlation over results from an interval measure such that smaller errors in gap size are preferred. When seeking to improve over existing systems, we care most about comparisons among the best systems. For that purpose we prefer head-weighted measures such as tau_AP, which is designed for ordinal data. No existing head-weighted measure fully leverages the information present in interval effectiveness measures. This paper introduces such a measure, referred to as Pearson Rank.
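To make the distinction concrete, the sketch below computes Kendall's tau and tau_AP (following its standard definition) on two perturbed rankings: swapping systems at the tail versus at the head leaves Kendall's tau unchanged but lowers tau_AP, which is what head-weighting means. Pearson Rank itself is not reproduced here; the rankings are made up.

```python
# Head-weighting demo: Kendall tau treats all pairwise swaps equally,
# while tau_AP penalizes errors among the top-ranked systems more.
from scipy.stats import kendalltau

def tau_ap(reference, observed):
    """AP rank correlation of `observed` against the `reference` order."""
    ref_pos = {item: i for i, item in enumerate(reference)}
    n = len(observed)
    total = 0.0
    for i in range(1, n):
        # Fraction of items ranked above position i that are ordered correctly.
        correct = sum(1 for j in range(i)
                      if ref_pos[observed[j]] < ref_pos[observed[i]])
        total += correct / i
    return 2.0 * total / (n - 1) - 1.0

truth = [0, 1, 2, 3, 4]                  # reference ranking, best system first
tail_swap = [0, 1, 2, 4, 3]              # error among the worst systems
head_swap = [1, 0, 2, 3, 4]              # error among the best systems

for name, obs in [("tail swap", tail_swap), ("head swap", head_swap)]:
    kt = kendalltau(truth, obs)[0]
    print(f"{name}: Kendall tau = {kt:.2f}, tau_AP = {tau_ap(truth, obs):.3f}")
# Both swaps give Kendall tau = 0.80, but tau_AP is 0.875 vs 0.500.
```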
Contactless communication has become omnipresent in our daily lives, from simple access cards to electronic passports. Such systems are particularly vulnerable to relay attacks, in which an adversary relays the messages from a prover to a verifier. Distance-bounding protocols were introduced to counter such attacks. Lately, there has been a very active research trend towards improving the security of these protocols, but also towards ensuring strong privacy properties with respect to active adversaries and malicious verifiers. A particularly difficult threat to address is terrorist fraud, in which a far-away prover cooperates with a nearby accomplice to fool a verifier. The usual defence against this attack is to make it impossible for the accomplice to succeed unless the prover provides him with enough information to recover the prover's secret key and impersonate him later on. However, the mere existence of a long-term secret key is problematic with respect to privacy. In this paper, we propose a novel approach in which the prover leaks not his secret key but a reusable session key along with a group signature on it. This allows the adversary to impersonate him even without knowing his signature key. Based on this approach, we give the first distance-bounding protocol, called SPADE, integrating anonymity, revocability, and provable resistance to standard threat models.
The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud, a huge amount of machine-readable knowledge encoded as RDF statements, with the aim of improving recommendation effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of feature selection technique influences its performance in terms of accuracy and diversity. The experimental evaluation on two standard datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to outperform several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
When multiple TCP connections are used between the same host pair, they often share a common bottleneck, especially when they are encapsulated together, e.g. in VPN scenarios. Then, connections after the first should not have to guess the right initial value for the congestion window, but rather obtain the appropriate value from the other connections. This allows short flows to complete much faster, but it can also produce large bursts that cause problems of their own. Prior work used timer-based pacing methods to alleviate this problem; we introduce a new algorithm that "paces" packets by instead correctly maintaining the ACK clock, and show its positive impact in combination with a previously presented congestion coupling algorithm.
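A minimal sketch of the window-sharing idea described above: a new connection in a coupled group starts from a share of the aggregate congestion window already learned for the path, instead of a blind initial guess. The class, values, and splitting rule are assumptions for illustration; the paper's ACK-clock pacing algorithm is not reproduced.

```python
# Toy sketch of sharing congestion-window state among coupled connections
# that traverse one bottleneck: later flows inherit from the aggregate.

class CoupledGroup:
    """Tracks one aggregate congestion window for a host pair."""
    def __init__(self, initial_window=10):
        self.aggregate_cwnd = initial_window   # packets, learned by the first flow
        self.flows = []

    def add_flow(self):
        self.flows.append(len(self.flows))
        # Each flow receives an equal share of the learned aggregate window,
        # so short flows skip slow-start guessing and complete faster.
        return max(1, self.aggregate_cwnd // len(self.flows))

group = CoupledGroup(initial_window=40)        # first flow already probed the path
for _ in range(3):
    print("new flow starts with cwnd =", group.add_flow(), "packets")
```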
Social animals, as found in fish schools, bird flocks, bee hives, and ant colonies, are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, these animals in many cases use very simple, decentralized communication mechanisms that do not require a single leader. This makes the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence, and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO). The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge. This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively. The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP. For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.
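For readers unfamiliar with the MAX-MIN ant system mentioned above, here is a sketch of its pheromone update on a pseudo-Boolean problem (OneMax), using the bounds and best-so-far reinforcement commonly analyzed in the theory literature; the exact parameters in the tutorial's results may differ.

```python
# MAX-MIN ant system (MMAS) sketch on OneMax: pheromone values act as
# per-bit sampling probabilities, reinforced toward the best-so-far
# solution and clamped to the MMAS bounds [1/n, 1 - 1/n].
import numpy as np

rng = np.random.default_rng(6)
n, rho, iterations = 20, 0.1, 300
tau = np.full(n, 0.5)                       # pheromone = P(bit i is set to 1)
best_x, best_fit = None, -1

for _ in range(iterations):
    x = (rng.random(n) < tau).astype(int)   # construct a solution from pheromones
    fit = x.sum()                           # OneMax fitness: number of ones
    if fit > best_fit:
        best_x, best_fit = x, fit
    tau = (1 - rho) * tau + rho * best_x    # reinforce the best-so-far solution
    tau = np.clip(tau, 1 / n, 1 - 1 / n)    # MMAS pheromone bounds

print("best fitness:", best_fit, "of", n)
```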
The validation of simulation models (e.g., of electronic control units for vehicles) in industry is becoming increasingly challenging due to their growing complexity. To systematically assess the quality of such models, software metrics seem promising. In this paper we explore the use of software metrics and outlier analysis as a means to assess the quality of model-based software. More specifically, we investigate how results from regression analysis applied to measurement data obtained from size and complexity metrics can be mapped to software quality. Using the moving-averages approach, models were fit to data from over 65,000 software revisions for 71 simulation models representing different electronic control units of real premium vehicles. Subsequent investigations using studentized deleted residuals and Cook's distance revealed outliers among the measurements. From these outliers we identified a subset that provides meaningful information (anomalies) by comparing outlier scores with expert opinions. Eight engineers were interviewed separately about outlier impact on software quality. Findings were validated in subsequent workshops. The results show correlations between outliers and their impact on four of the considered quality characteristics. They also demonstrate the applicability of this approach in industry.
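The outlier-detection step described above can be sketched with standard regression diagnostics: fit a model to metric measurements over revisions, then flag points with large studentized deleted residuals or Cook's distance. The data below is synthetic and the thresholds are common rules of thumb, not the paper's exact criteria.

```python
# Regression-diagnostics sketch: flag anomalous revisions via studentized
# deleted residuals and Cook's distance on a synthetic metric time series.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(7)
revisions = np.arange(100, dtype=float)
complexity = 50 + 0.3 * revisions + rng.normal(0, 2, size=100)
complexity[60] += 25                       # inject an anomalous revision

model = sm.OLS(complexity, sm.add_constant(revisions)).fit()
infl = OLSInfluence(model)
student = infl.resid_studentized_external  # studentized deleted residuals
cooks = infl.cooks_distance[0]

flagged = np.where((np.abs(student) > 3) | (cooks > 4 / len(revisions)))[0]
print("flagged revisions:", flagged)       # should include revision 60
```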
Bitcoin, a decentralized cryptographic currency that has experienced proliferating popularity over the past few years, is the common denominator in a wide variety of cybercrime. We perform a measurement analysis of CryptoLocker, a family of ransomware that encrypts a victim's files until a ransom is paid, within the Bitcoin ecosystem from September 5, 2013 through January 31, 2014. Using information collected from online fora, such as reddit and BitcoinTalk, as an initial starting point, we generate a cluster of 968 Bitcoin addresses belonging to CryptoLocker. We provide a lower bound for CryptoLocker's economy in Bitcoin and identify 795 ransom payments totalling 1,128.40 BTC ($310,472.38), but show that the proceeds could have been worth upwards of $1.1 million at peak valuation. By analyzing ransom payment timestamps both longitudinally across CryptoLocker's operating period and transversely across times of day, we detect changes in distributions and form conjectures on CryptoLocker that corroborate information from previous efforts. Additionally, we construct a network topology to detail CryptoLocker's financial infrastructure and obtain auxiliary information on the CryptoLocker operation. Most notably, we find evidence that suggests connections to popular Bitcoin services, such as Bitcoin Fog and BTC-e, and subtle links to other cybercrimes surrounding Bitcoin, such as the Sheep Marketplace scam of 2013. We use our study to underscore the value of measurement analyses and threat intelligence in understanding the erratic cybercrime landscape.
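Address clustering in such Bitcoin measurement studies typically relies on the multi-input ("co-spend") heuristic: addresses that jointly sign inputs of one transaction are assumed to share an owner. The union-find sketch below illustrates that heuristic on made-up transactions; the paper's full methodology, seeded from forum reports, is richer than this.

```python
# Co-spend clustering sketch with union-find: merge all input addresses of
# each transaction into one cluster. Transactions here are hypothetical.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]      # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

transactions = [                            # input address lists (made up)
    ["addr1", "addr2"],
    ["addr2", "addr3"],
    ["addr4"],
]
for inputs in transactions:
    for addr in inputs[1:]:
        union(inputs[0], addr)              # merge all inputs of one transaction

clusters = {}
for addr in parent:
    clusters.setdefault(find(addr), []).append(addr)
print(list(clusters.values()))              # addr1-3 cluster together; addr4 alone
```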
As the Internet becomes an important part of the infrastructure our society depends on, it is crucial to construct networks that are able to work even when part of the network is compromised. This paper presents the first practical intrusion-tolerant network service, targeting high-value applications such as monitoring and control of global clouds and management of critical infrastructure for the power grid. We use an overlay approach to leverage the existing IP infrastructure while providing the required resiliency and timeliness. Our solution overcomes malicious attacks and compromises in both the underlying network infrastructure and in the overlay itself. We deploy and evaluate the intrusion-tolerant overlay implementation on a global cloud spanning East Asia, North America, and Europe, and make it publicly available.
Suppose that you have n truly random bits x = (x_1, …, x_n) and you wish to use them to generate m ≫ n pseudorandom bits y = (y_1, …, y_m) using a local mapping, i.e., each y_i should depend on at most d = O(1) bits of x. In the polynomial regime of m = n^s, s > 1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute y_i by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs (x_{i_1}, …, x_{i_d}). Our goal in this paper is to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against F_2-linear adversaries (i.e., the distribution y has low bias) is achieved if the predicate is (a) k = Ω(s)-resilient, i.e., uncorrelated with any k-subset of its inputs, and (b) has algebraic degree Ω(s) even after fixing Ω(s) of its inputs. We also show that these requirements are necessary, and so they form a tight characterization (up to constants) of security against linear attacks. Our positive result shows that a d-local low-bias generator can have output length n^{Ω(d)}, answering an open question of Mossel, Shpilka and Trevisan (FOCS 2003). Our negative result shows that a candidate pseudorandom generator proposed by the first author (computational complexity, 2015) and by O'Donnell and Witmer (CCC 2014) is insecure. We use similar techniques to refute a conjecture of Feldman, Perkins and Vempala (STOC 2015) regarding the hardness of planted constraint satisfaction problems. (2) Motivated by the cryptanalysis literature, we consider security against algebraic attacks. We provide the first theoretical treatment of such attacks by formalizing a general notion of algebraic inversion and distinguishing attacks based on the Polynomial Calculus proof system. We show that algebraic attacks succeed if and only if there exists a nonzero polynomial Q of degree e = O(s) whose roots cover the roots of P or cover the roots of P's complement. As a corollary, we obtain the first example of a predicate P for which the generated sequence y passes all linear tests but fails some polynomial-time computable test, answering an open question posed by the first author (Question 4.9, computational complexity 2015).
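The construction itself is short: each output bit applies a fixed d-ary predicate P to a random public d-tuple of seed bits. The sketch below instantiates this with the well-known XOR-AND predicate z_1 ⊕ z_2 ⊕ z_3 ⊕ (z_4 ∧ z_5); this predicate is a common illustrative choice and is not claimed to meet the resilience and degree criteria established in the paper.

```python
# Goldreich-style random local function: m output bits, each computed by a
# fixed 5-ary predicate over a random public tuple of distinct seed bits.
import numpy as np

rng = np.random.default_rng(8)
n, m, d = 32, 64, 5                         # seed length, output length, locality

def predicate(bits):
    # P(z1..z5) = z1 XOR z2 XOR z3 XOR (z4 AND z5)
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

# Public index tuples: each output bit reads d distinct seed positions.
tuples = [rng.choice(n, size=d, replace=False) for _ in range(m)]

x = rng.integers(0, 2, size=n)              # truly random seed
y = np.array([predicate(x[t]) for t in tuples])
print("seed bits:", n, "-> output bits:", len(y))
```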