Biblio
We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among n parties, who need to each choose an action, which jointly will form a solution to the optimization problem. The coordination complexity represents the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the n parties to play a nearly optimal solution. We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper bound the price of anarchy in certain games with dynamically changing populations. We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph. Our upper bound in fact extends much more generally to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game, and of computing a stable matching.
We consider wireless networks in which the effects of interference are determined by the SINR model. We address the question of structuring distributed communication when stations have very limited individual capabilities. In particular, nodes do not know their geographic coordinates, neighborhoods or even the size n of the network, nor can they sense collisions. Each node is equipped only with its unique name from a range {1, ..., N}. We study the following three settings and distributed algorithms for communication problems in each of them. In the uncoordinated-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm which wakes up all the nodes in O(n log^2 N) rounds with high probability. In the synchronized-start case, when all the nodes simultaneously start an execution, we give a randomized algorithm that computes a backbone of the network in O(Δ log^7 N) rounds with high probability. Finally, in the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone network in time O(n log^2 N + Δ log^7 N) with high probability.
Efficient implementation of double point multiplication is crucial for elliptic curve cryptographic systems. We propose efficient algorithms and architectures for the computation of double point multiplication on binary elliptic curves and provide a comparative analysis of their performance at the 112-bit security level. To the best of our knowledge, this is the first work in the literature that considers the design and implementation of simultaneous computation of double point multiplication. We first present algorithms for the three main double point multiplication methods. Then, we perform data-flow analysis and propose hardware architectures for the presented algorithms. Finally, we implement the proposed state-of-the-art architectures on an FPGA platform for comparison purposes and report the area and timing results. Our results indicate that differential addition chain based algorithms are better suited to compute double point multiplication over binary elliptic curves for high performance applications.
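The abstract does not spell out the three double point multiplication methods; as a point of reference only, below is a minimal Python sketch of the textbook interleaved ("Shamir's trick") approach to computing kP + lQ in a single double-and-add pass. The group operations point_add, point_double and identity are placeholder callbacks, not the paper's binary-curve arithmetic or its differential addition chains.

    def shamir_double_mul(k, l, P, Q, point_add, point_double, identity):
        """Compute k*P + l*Q with one interleaved double-and-add pass
        (textbook 'Shamir's trick'). The group arithmetic is supplied by
        the caller; for a real curve these would be curve point operations."""
        PQ = point_add(P, Q)  # precompute P + Q once
        R = identity
        for i in range(max(k.bit_length(), l.bit_length()) - 1, -1, -1):
            R = point_double(R)
            kb, lb = (k >> i) & 1, (l >> i) & 1
            if kb and lb:
                R = point_add(R, PQ)
            elif kb:
                R = point_add(R, P)
            elif lb:
                R = point_add(R, Q)
        return R

    # Toy check using the additive group of integers mod 1009 in place of a curve:
    mod = 1009
    res = shamir_double_mul(23, 45, 7, 11,
                            point_add=lambda a, b: (a + b) % mod,
                            point_double=lambda a: (2 * a) % mod,
                            identity=0)
    assert res == (23 * 7 + 45 * 11) % mod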
Language-integrated query is an embedding of database queries into a host language to code queries at a higher level than the all-too-common concatenation of strings of SQL fragments. The eventually produced SQL is ensured to be well-formed and well-typed, and hence free from embarrassing (security) problems. Language-integrated query takes advantage of the host language's functional and modular abstractions to compose and reuse queries and build query libraries. Furthermore, language-integrated query systems like T-LINQ generate efficient SQL by applying a number of program transformations to the embedded query. Alas, the set of transformation rules is not designed to be extensible. We demonstrate a new technique of integrating database queries into a typed functional programming language, so as to write well-typed, composable queries and execute them efficiently on any SQL back-end as well as on an in-memory noSQL store. A distinct feature of our framework is that both the query language and the transformation rules needed to generate efficient SQL are safely user-extensible, to account for many variations in the SQL back-ends, as well as for domain-specific knowledge. The transformation rules are guaranteed to be type-preserving and hygienic by their very construction. They can be built from separately developed and reusable parts and arbitrarily composed into optimization pipelines. With this technique we have embedded into OCaml a relational query language that supports a very large subset of SQL including grouping and aggregation. Its types cover the complete set of intricate SQL behaviors.
With the ever-increasing growth of Internet usage, ensuring high security for information has gained great importance due to the numerous threats in communication channels. Hence there is continuous research towards finding suitable approaches to ensure high security for information. Although primarily used in the military and diplomatic communities in the past, cryptography has in recent decades been used extensively to provide security on the Internet. One such approach is the application of chaos theory in cryptosystems. In this work, we propose the use of a combined multiple recursive generator (CMRG) for key generation, driven by a chaotic function, to generate multiple different keys. A negligible difference in the parameters of the chaotic function generates completely different keys as well as ciphertext. The main motive for developing the chaos-based cryptosystem is to attain encryption that provides high security at comparatively higher speed, yet with lower complexity and cost than conventional encryption algorithms.
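Neither the chaotic function nor the CMRG parameters are specified in the abstract; the following is only a minimal Python sketch of the general idea, assuming the logistic map x_{n+1} = r*x_n*(1 - x_n) as the chaotic seed source and small illustrative MRG multipliers and moduli that are not taken from the paper.

    def chaotic_seeds(x0, r=3.9999, count=6, burn_in=100, mod=2**31 - 1):
        """Derive integer seeds from iterates of the logistic map; tiny
        changes in x0 or r yield completely different seeds (assumed
        chaotic function, not necessarily the one used in the paper)."""
        x = x0
        for _ in range(burn_in):                   # discard the transient
            x = r * x * (1.0 - x)
        seeds = []
        for _ in range(count):
            x = r * x * (1.0 - x)
            seeds.append(int(x * mod) % mod or 1)  # avoid an all-zero state
        return seeds

    def cmrg_keystream(seeds, n):
        """Toy combined multiple recursive generator: two order-3 MRGs whose
        outputs are combined modulo m1. Moduli and multipliers below are
        illustrative placeholders only."""
        m1, m2 = 2147483647, 2147483629
        s1, s2 = list(seeds[:3]), list(seeds[3:6])
        words = []
        for _ in range(n):
            y1 = (63308 * s1[1] - 183326 * s1[0]) % m1
            y2 = (86098 * s2[2] - 539608 * s2[0]) % m2
            s1, s2 = [s1[1], s1[2], y1], [s2[1], s2[2], y2]
            words.append((y1 - y2) % m1)           # combined keystream word
        return words

    key_a = cmrg_keystream(chaotic_seeds(0.123456), n=4)
    key_b = cmrg_keystream(chaotic_seeds(0.123457), n=4)  # perturbed x0
    # key_a and key_b differ completely, illustrating the key sensitivity above.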
In this paper, we present a new secure message transmission scheme using hyperchaotic discrete primary and auxiliary chaotic systems. The novelty lies in the use of auxiliary chaotic systems for encryption purposes. We have used the modified Henon hyperchaotic discrete-time system. The use of the auxiliary system allows the same keystream to be generated at the transmitter and receiver sides, and the initial conditions of the auxiliary systems, combined with other transmitter parameters, serve as the key. Using auxiliary systems means that information about the keystream used in the encryption function is not present in the transmitted signal available to intruders; hence reconstruction of the keystream is not possible. The encrypted message is added onto the dynamics of the transmitter using the inclusion technique, and the dynamical left-inversion technique is employed to retrieve the unknown message. The simulation results confirm the robustness of the method used, and some comments are made about the key space from the cryptographic viewpoint.
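As a rough illustration of generating identical keystreams from shared chaotic dynamics, here is a minimal Python sketch using the classical two-dimensional Henon map and plain XOR masking; the paper's modified hyperchaotic system, the inclusion-based embedding and the left-inversion recovery are not reproduced, and every parameter below is an assumption.

    def henon_keystream(x0, y0, n, a=1.4, b=0.3):
        """Keystream bytes from the classical Henon map
        x_{k+1} = 1 - a*x_k^2 + y_k,  y_{k+1} = b*x_k
        (a stand-in for the paper's modified hyperchaotic variant)."""
        x, y = x0, y0
        for _ in range(200):                         # discard the transient
            x, y = 1.0 - a * x * x + y, b * x
        stream = []
        for _ in range(n):
            x, y = 1.0 - a * x * x + y, b * x
            stream.append(int(abs(x) * 1e6) % 256)   # quantize state to a byte
        return bytes(stream)

    def mask(message, key=(0.1, 0.1)):
        """XOR the message with the keystream; both sides derive the same
        keystream from the shared initial conditions (the key), so applying
        mask() again with the same key recovers the plaintext."""
        ks = henon_keystream(*key, len(message))
        return bytes(m ^ k for m, k in zip(message, ks))

    ciphertext = mask(b"attack at dawn")
    assert mask(ciphertext) == b"attack at dawn"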
The rise of sensor-equipped smart phones has enabled a variety of classification-based applications that provide personalized services based on user data extracted from sensor readings. However, malicious applications aggressively collect sensitive information from inherent user data without permission. Furthermore, they can mine sensitive information from user data merely through the classification process. These privacy threats raise serious privacy concerns. In this paper, we introduce two new privacy concerns, which are inherent-data privacy and latent-data privacy. We propose a framework that enables a data-obfuscation mechanism to be developed easily. It preserves latent-data privacy while guaranteeing satisfactory service quality. The proposed framework preserves privacy against powerful adversaries who have knowledge of users' access pattern and the data-obfuscation mechanism. We validate our framework on a real classification-oriented dataset. The experimental results confirm that our framework is superior to the basic obfuscation mechanism.
After being widely studied in theory, physical layer security schemes are getting closer to entering the consumer market. Still, a thorough practical analysis of their resilience against attacks is missing. In this work, we use software-defined radios to implement such a physical layer security scheme, namely, orthogonal blinding. To this end, we use orthogonal frequency-division multiplexing (OFDM) as a physical layer, similarly to WiFi. In orthogonal blinding, a multi-antenna transmitter overlays the data it transmits with noise in such a way that every node except the intended receiver is disturbed by the noise. Still, our known-plaintext attack can extract the data signal at an eavesdropper by means of an adaptive filter trained using a few known data symbols. Our demonstrator illustrates the iterative training process at the symbol level, thus showing the practicability of the attack.
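The demonstrator itself is not described in enough detail to reproduce here; the sketch below is only a generic known-plaintext attack of the kind the abstract describes, in Python/NumPy, with an assumed two-antenna eavesdropper and an LMS adaptive filter trained on known symbols.

    import numpy as np

    def lms_extract(rx, known, n_train, mu=0.05):
        """Learn complex filter weights w so that w^H rx[k] approximates the
        data symbol, using n_train known (plaintext) symbols, then apply w
        to the remaining blinded samples. rx has shape (n_symbols, n_antennas)."""
        w = np.zeros(rx.shape[1], dtype=complex)
        for k in range(n_train):                 # training on known symbols
            err = known[k] - np.vdot(w, rx[k])
            w += mu * np.conj(err) * rx[k]       # standard LMS update
        return np.array([np.vdot(w, r) for r in rx[n_train:]])

    # Synthetic usage with an assumed channel and residual blinding noise:
    rng = np.random.default_rng(0)
    symbols = np.sign(rng.standard_normal(200)) + 1j * np.sign(rng.standard_normal(200))
    h = np.array([0.8 + 0.2j, -0.3 + 0.5j])      # hypothetical eavesdropper channel
    rx = symbols[:, None] * h + 0.3 * rng.normal(size=(200, 2))
    recovered = lms_extract(rx, symbols, n_train=50)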
Many applications of mobile computing require the computation of the dot-product of two vectors. For example, the dot-product of an individual's genome data and the gene biomarkers of a health center can help detect diseases in m-Health, and that of the interests of two persons can facilitate friend discovery in mobile social networks. Nevertheless, exposing the inputs of dot-product computation discloses sensitive information about the two participants, leading to severe privacy violations. In this paper, we tackle the problem of privacy-preserving dot-product computation targeting mobile computing applications in which secure channels are hard to establish and computational efficiency is highly desirable. We first propose two basic schemes and then present the corresponding advanced versions to improve efficiency and enhance privacy-protection strength. Furthermore, we theoretically prove that our proposed schemes can simultaneously achieve privacy-preservation, non-repudiation, and accountability. Our numerical results verify the performance of the proposed schemes in terms of communication and computational overheads.
Standardization and harmonization efforts have reached a consensus towards using a special-purpose Vehicular Public-Key Infrastructure (VPKI) in upcoming Vehicular Communication (VC) systems. However, there are still several technical challenges with no conclusive answers; one such important yet open challenge is the acquisition of short-term credentials (pseudonyms): how should each vehicle interact with the VPKI, e.g., how frequently and for how long? Should each vehicle itself determine the pseudonym lifetime? Answering these questions is far from trivial. Each choice can affect both user privacy and system performance and possibly, as a result, security. In this paper, we make a novel systematic effort to address this multifaceted question. We craft three generally applicable policies and experimentally evaluate the VPKI system performance, leveraging two large-scale mobility datasets. We consider the pseudonym acquisition policies that are most promising in terms of efficiency; we find that within this class of policies, the most promising policy in terms of privacy protection can be supported with moderate overhead. Moreover, in all cases, this work is the first to provide tangible evidence that the state-of-the-art VPKI can serve sizable areas or domains with modest computing resources.
Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with numerous difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, considers methods for ensuring secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods for breaking cryptographic systems. Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting proofs and the mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptography that have been successfully tackled with EC so far and discuss why those topics are suitable for applying EC. However, care must be taken, since there exist a number of problems that seem to be impossible to solve with EC, and one needs to realize the limitations of the heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and evaluate the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what that means for the results. In doing so, we will give special emphasis to the perspective that cryptography presents a source of benchmark problems for the EC community. To conclude, we will present a number of topics we consider to be a strong research choice that can have real-world impact. In that part, we give special attention to cryptographic problems where the cryptographic community has successfully applied EC, but where those problems remained out of the focus of the EC community. This tutorial will also present some live demos of EC in action when dealing with cryptographic problems. We will present several problems, ways of encoding solutions, the impact of the choice of algorithm and, finally, we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
Real-world wireless networks usually have diverse connectivity characteristics. Although existing works have identified replication as the key to the successful design of routing protocols for these networks, the questions of when replication should be used, by how much, and how to distribute packet copies are still not satisfactorily answered. In this paper, we investigate the above questions and present the design of the Hybrid Routing Protocol (HRP). We make a key observation that delay correlations can significantly impact performance improvements gained from packet replication. Thus, we propose a novel model to capture the correlations of inter-contact times among a group of nodes. HRP utilizes both direct delay feedback and the proposed model to estimate the replication gain, which is then fed into a novel regret-minimization algorithm to dynamically decide the amount of packet replication under unknown network conditions. We evaluate HRP through extensive simulations. We show that HRP achieves up to 3.5x delivery ratio improvement and up to 50% delay reduction, with comparable and even lower overhead than state-of-the-art routing protocols.
Recently, chaotic public-key cryptography has attracted much attention from researchers, due to the favorable characteristics of chaotic maps. Given the security superiorities and computational efficiencies of chaotic maps over other cryptosystems, in this paper a novel identity-based signcryption scheme is proposed using extended chaotic maps. The difficulty of the chaos-based discrete logarithm (CDL) problem forms the foundation of the security of the proposed ECM-IBSC scheme.
This paper describes the application of permanent magnets in a permanent magnet generator (PMG) for renewable energy power plants. The permanent magnets used are bonded hybrid magnets made of a mixture of barium ferrite magnetic powder (50 wt%) and NdFeB magnetic powder (50 wt%) with 15 wt% of adhesive polymer as a binder. The bonded hybrid magnets were prepared by the hot press method at a pressure of 2 tons and a temperature of 200°C for 15 minutes. The magnetic properties obtained were remanence induction (Br) = 1.54 kG, coercivity (Hc) = 1.290 kOe, maximum energy product (BHmax) = 0.28 MGOe, and surface remanence induction (Br) = 1200 gauss.
The recent proliferation of human-carried mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to the public crowd equipped with various mobile devices. A fundamental issue in such systems is to effectively incentivize worker participation. However, instead of being an isolated module, the incentive mechanism usually interacts with other components which may affect its performance, such as data aggregation component that aggregates workers' data and data perturbation component that protects workers' privacy. Therefore, different from past literature, we capture such interactive effect, and propose INCEPTION, a novel MCS system framework that integrates an incentive, a data aggregation, and a data perturbation mechanism. Specifically, its incentive mechanism selects workers who are more likely to provide reliable data, and compensates their costs for both sensing and privacy leakage. Its data aggregation mechanism also incorporates workers' reliability to generate highly accurate aggregated results, and its data perturbation mechanism ensures satisfactory protection for workers' privacy and desirable accuracy for the final perturbed results. We validate the desirable properties of INCEPTION through theoretical analysis, as well as extensive simulations.
Centralized spectrum management is one of the key dynamic spectrum access (DSA) mechanisms proposed to govern spectrum sharing between government incumbent users (IUs) and commercial secondary users (SUs). In current centralized DSA designs, the operation data of both government IUs and commercial SUs needs to be shared with a central server. However, the operation data of government IUs is often classified information, and the SU operation data may also be a commercial secret. The current system design fails to satisfy the privacy requirements of both IUs and SUs, since the central server is not necessarily trustworthy enough to hold such sensitive operation data. To address the privacy issue, this paper presents a privacy-preserving centralized DSA system (P2-SAS), which realizes the complex spectrum allocation process of DSA through efficient secure multi-party computation. In P2-SAS, none of the IU or SU operation data is exposed to any snooping party, including the central server itself. We formally prove the correctness and privacy-preserving property of P2-SAS and evaluate its scalability and practicality using experiments based on real-world data. Experimental results show that P2-SAS can respond to an SU's spectrum request in 6.96 seconds with a communication overhead of less than 4 MB.
Contactless communications have become omnipresent in our daily lives, from simple access cards to electronic passports. Such systems are particularly vulnerable to relay attacks, in which an adversary relays the messages from a prover to a verifier. Distance-bounding protocols were introduced to counter such attacks. Lately, there has been a very active research trend on improving the security of these protocols, but also on ensuring strong privacy properties with respect to active adversaries and malicious verifiers. In particular, a difficult threat to address is terrorist fraud, in which a far-away prover cooperates with a nearby accomplice to fool a verifier. The usual defence against this attack is to make it impossible for the accomplice to succeed unless the prover provides him with enough information to recover his secret key and impersonate him later on. However, the mere existence of a long-term secret key is problematic with respect to privacy. In this paper, we propose a novel approach in which the prover leaks not his secret key but a reusable session key along with a group signature on it. This allows the adversary to impersonate him even without knowing his signature key. Based on this approach, we give the first distance-bounding protocol, called SPADE, integrating anonymity, revocability and provable resistance to standard threat models.
The ever-increasing interest in semantic technologies and the availability of several open knowledge sources have fueled recent progress in the field of recommender systems. In this paper we feed recommender systems with features coming from the Linked Open Data (LOD) cloud - a huge amount of machine-readable knowledge encoded as RDF statements - with the aim of improving recommender systems effectiveness. In order to exploit the natural graph-based structure of RDF data, we study the impact of the knowledge coming from the LOD cloud on the overall performance of a graph-based recommendation algorithm. In more detail, we investigate whether the integration of LOD-based features improves the effectiveness of the algorithm and to what extent the choice of different feature selection techniques influences its performance in terms of accuracy and diversity. The experimental evaluation on two state-of-the-art datasets shows a clear correlation between the feature selection technique and the ability of the algorithm to maximize a specific evaluation metric. Moreover, the graph-based algorithm leveraging LOD-based features is able to outperform several state-of-the-art baselines, such as collaborative filtering and matrix factorization, thus confirming the effectiveness of the proposed approach.
When multiple TCP connections are used between the same host pair, they often share a common bottleneck – especially when they are encapsulated together, e.g. in VPN scenarios. Then, all connections after the first should not have to guess the right initial value for the congestion window, but rather get the appropriate value from other connections. This allows short flows to complete much faster – but it can also lead to large bursts that cause problems on their own. Prior work used timer-based pacing methods to alleviate this problem; we introduce a new algorithm that ``paces'' packets by instead correctly maintaining the ACK clock, and show its positive impact in combination with a previously presented congestion coupling algorithm.
Bitcoin, a decentralized cryptographic currency that has experienced proliferating popularity over the past few years, is the common denominator in a wide variety of cybercrime. We perform a measurement analysis of CryptoLocker, a family of ransomware that encrypts a victim's files until a ransom is paid, within the Bitcoin ecosystem from September 5, 2013 through January 31, 2014. Using information collected from online fora, such as reddit and BitcoinTalk, as an initial starting point, we generate a cluster of 968 Bitcoin addresses belonging to CryptoLocker. We provide a lower bound for CryptoLocker's economy in Bitcoin and identify 795 ransom payments totalling 1,128.40 BTC (\$310,472.38), but show that the proceeds could have been worth upwards of \$1.1 million at peak valuation. By analyzing ransom payment timestamps both longitudinally across CryptoLocker's operating period and transversely across times of day, we detect changes in distributions and form conjectures on CryptoLocker that corroborate information from previous efforts. Additionally, we construct a network topology to detail CryptoLocker's financial infrastructure and obtain auxiliary information on the CryptoLocker operation. Most notably, we find evidence that suggests connections to popular Bitcoin services, such as Bitcoin Fog and BTC-e, and subtle links to other cybercrimes surrounding Bitcoin, such as the Sheep Marketplace scam of 2013. We use our study to underscore the value of measurement analyses and threat intelligence in understanding the erratic cybercrime landscape.
As the Internet becomes an important part of the infrastructure our society depends on, it is crucial to construct networks that are able to work even when part of the network is compromised. This paper presents the first practical intrusion-tolerant network service, targeting high-value applications such as monitoring and control of global clouds and management of critical infrastructure for the power grid. We use an overlay approach to leverage the existing IP infrastructure while providing the required resiliency and timeliness. Our solution overcomes malicious attacks and compromises in both the underlying network infrastructure and in the overlay itself. We deploy and evaluate the intrusion-tolerant overlay implementation on a global cloud spanning East Asia, North America, and Europe, and make it publicly available.
Adaptivity is an important feature of data analysis - the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated a general formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen ``queries'' about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions towards resolving this question: We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. (STOC, 2015), and have been applied in subsequent work by those authors (Science, 2015; NIPS, 2015). We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that the stability notion guaranteed by differential privacy implies low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.
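In informal terms, the transfer theorem underlying these results has the following shape (constants omitted; this is a schematic paraphrase, not the paper's exact statement): if a mechanism M is (\varepsilon, \delta)-differentially private and, on a sample S of n i.i.d. draws from a distribution \mathcal{P}, selects a statistical query q, then

    \Pr\bigl[\, |q(\mathcal{P}) - q(S)| > O(\varepsilon) \,\bigr] \;\le\; O(\delta / \varepsilon),

where q(S) denotes the empirical mean of q on S and q(\mathcal{P}) its expectation under \mathcal{P}; differential privacy of the query-selection process thus directly bounds the generalization error of adaptively chosen queries.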
A finite state machine (FSM) is responsible for controlling the overall functionality of most digital systems and, therefore, the security of the whole system can be compromised if there are vulnerabilities in the FSM. These vulnerabilities can be created by improper designs or by the synthesis tool, which introduces additional don't-care states and transitions during the optimization and synthesis process. An attacker can utilize these vulnerabilities to perform fault injection attacks or insert malicious hardware modifications (Trojans) to gain unauthorized access to some specific states. To our knowledge, no systematic approaches have been proposed to analyze these vulnerabilities in FSMs. In this paper, we develop a framework named Analyzing Vulnerabilities in FSM (AVFSM), which extracts the state transition graph (including the don't-care states and transitions) from a gate-level netlist using a novel Automatic Test Pattern Generation (ATPG) based approach and quantifies the vulnerabilities of the design to fault injection and hardware Trojan insertion. We demonstrate the applicability of the AVFSM framework by analyzing the vulnerabilities in the FSMs of AES and RSA encryption modules. We also propose a low-cost mitigation technique to make FSMs more secure against these attacks.
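For intuition only, the toy Python sketch below illustrates the kind of don't-care analysis the framework performs, on a hand-written next-state function with assumed states; AVFSM itself works on a gate-level netlist via ATPG, which is not reproduced here.

    def dont_care_analysis(n_bits, defined_states, protected, next_state):
        """Enumerate all 2**n_bits encodings, flag those outside the
        specification (don't-care states), and report which of them can
        reach a protected state in a single transition."""
        all_states = range(2 ** n_bits)
        dont_care = [s for s in all_states if s not in defined_states]
        risky = [s for s in dont_care if next_state(s) in protected]
        return dont_care, risky

    # Hypothetical 3-bit FSM: states 0-4 are specified, state 4 is privileged.
    dc, risky = dont_care_analysis(
        3, defined_states={0, 1, 2, 3, 4}, protected={4},
        next_state=lambda s: 4 if s > 5 else (s + 1) % 5)
    # dc == [5, 6, 7]; risky == [6, 7]: two unspecified encodings fall
    # directly into the privileged state, a potential fault-injection target.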