
SoS Newsletter- Advanced Book Block

 

 

International Conferences: Innovations in Theoretical Computer Science, Israel, 2015

 

The 6th Innovations in Theoretical Computer Science (ITCS) conference, sponsored by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT), was held at the Weizmann Institute of Science, Israel, January 11-13, 2015. ITCS (previously known as ICS) seeks to promote research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or cross-interdisciplinary areas, or introducing new techniques or new applications of known techniques). ITCS welcomes all submissions, whether aligned with current theory of computation research directions or deviating from them.


 

Pranjal Awasthi, Afonso S. Bandeira, Moses Charikar, Ravishankar Krishnaswamy, Soledad Villar, Rachel Ward; Relax, No Need to Round: Integrality of Clustering Formulations; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 191-200. Doi: 10.1145/2688073.2688116 We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: k-means and k-median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are k clusters in R^m and data from each cluster consists of n points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these k clusters as the optimal integral solution? For the k-median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation ε > 0 between the balls. In other words, the pairwise center separation is δ > 2 + ε. Under the same distributional model, the k-means LP relaxation fails to recover such clusters at separation as large as δ = 4. Yet, if we enforce PSD constraints on the k-means LP, we get exact cluster recovery at separation as low as δ > min{2 + √(2k/m), 2 + √2 + 2/m} + ε. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the k-means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, k-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.
Keywords: clustering, convex optimization, exact recovery, kmeans, kmedians (ID#: 15-5041)
URL: http://doi.acm.org/10.1145/2688073.2688116
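To make the LP relaxation discussed above concrete, the following minimal Python sketch sets up a standard k-median linear programming relaxation (assignment variables z and opening variables y) and solves it on two well-separated point clouds. The use of the numpy and cvxpy libraries, the variable names, and the exact formulation are illustrative assumptions, not taken from the paper; when the separation is large, the optimal fractional solution is expected to be integral, which is the recovery phenomenon the paper analyzes.

# A minimal sketch of a standard k-median LP relaxation (assumes numpy and cvxpy;
# the exact formulation analyzed in the paper may differ).
import numpy as np
import cvxpy as cp

def kmedian_lp(points, k):
    n = len(points)
    # Pairwise Euclidean distances D[p, q] = ||points[p] - points[q]||.
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    z = cp.Variable((n, n), nonneg=True)  # z[p, q]: fraction of point p assigned to center q
    y = cp.Variable(n, nonneg=True)       # y[q]: fraction to which point q is opened as a center
    constraints = [cp.sum(z, axis=1) == 1,  # every point is fully assigned
                   cp.sum(y) == k]          # exactly k (fractional) centers
    constraints += [z[:, q] <= y[q] for q in range(n)]  # assign only to open centers
    problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(D, z))), constraints)
    problem.solve()
    return y.value, problem.value

# Two clusters of unit "radius" whose centers are far apart: integrality is expected.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.uniform(-1, 1, size=(20, 2)),
                   rng.uniform(-1, 1, size=(20, 2)) + np.array([6.0, 0.0])])
y, objective = kmedian_lp(cloud, k=2)
print("largest fractional opening:", float(y.max()))  # close to 1 when recovery is exact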

 

Mika Göös, Toniann Pitassi, Thomas Watson; Zero-Information Protocols and Unambiguity in Arthur-Merlin Communication; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 113-122. Doi: 10.1145/2688073.2688074 We study whether information complexity can be used to attack the long-standing open problem of proving lower bounds against Arthur-Merlin (AM) communication protocols. Our starting point is to show that, in contrast to plain randomized communication complexity, every boolean function admits an AM communication protocol where on each yes-input, the distribution of Merlin's proof leaks no information about the input and, moreover, this proof is unique for each outcome of Arthur's randomness. We posit that these two properties of zero information leakage and unambiguity on yes-inputs are interesting in their own right and worthy of investigation as new avenues toward AM.

•Zero-information protocols (ZAM). Our basic ZAM protocol uses exponential communication for some functions, and this raises the question of whether more efficient protocols exist. We prove that all functions in the classical space-bounded complexity classes NL and L have polynomial-communication ZAM protocols. We also prove that ZAM complexity is lower bounded by co-nondeterministic communication complexity.
•Unambiguous protocols (UAM). Our most technically substantial result is an Ω(n) lower bound on the UAM complexity of the NP-complete set-intersection function; the proof uses information complexity arguments in a new, indirect way and overcomes the "zero-information barrier" described above. We also prove that in general, UAM complexity is lower bounded by the classic discrepancy bound, and we give evidence that it is not generally lower bounded by the classic corruption bound.
Keywords: arthur-merlin protocols, communication complexity, information complexity (ID#: 15-5042)
URL: http://doi.acm.org/10.1145/2688073.2688074

 

Oded Goldreich, Dana Ron; On Sample-Based Testers; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 337-345. Doi: 10.1145/2688073.2688080 The standard definition of property testing endows the tester with the ability to make arbitrary queries to "elements" of the tested object. In contrast, sample-based testers only obtain independently distributed elements (a.k.a. labeled samples) of the tested object. While sample-based testers were defined by Goldreich, Goldwasser, and Ron (JACM 1998), with few exceptions, most research in property testing is focused on query-based testers. In this work, we advance the study of sample-based property testers by providing several general positive results as well as by revealing relations between variants of this testing model. In particular:
•We show that certain types of query-based testers yield sample-based testers of sublinear sample complexity. For example, this holds for a natural class of proximity oblivious testers.
•We study the relation between distribution-free sample-based testers and one-sided error sample-based testers w.r.t. the uniform distribution.
While most of this work ignores the time complexity of testing, one part of it does focus on this aspect. The main result in this part is a sublinear-time sample-based tester in the dense graphs model for k-Colorability, for any k > 2.
Keywords: property testing, sampling, sublinear algorithms (ID#: 15-5043)
URL: http://doi.acm.org/10.1145/2688073.2688080
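As a toy illustration of the sample-based model above (not a construction from the paper), the sketch below tests the property "the n-bit string is all zeros" using only uniformly drawn labeled samples: it accepts the all-zeros string with probability 1 and rejects any ε-far string with constant probability using O(1/ε) samples. The function name and constants are assumptions made for the example.

# Toy one-sided-error sample-based tester, w.r.t. the uniform distribution, for the
# property "the string is all zeros" (illustrative only, not from the paper).
import math
import random

def sample_based_all_zeros_tester(x, eps, rng=random):
    n = len(x)
    num_samples = math.ceil(2 / eps)        # enough for a constant rejection guarantee
    for _ in range(num_samples):
        i = rng.randrange(n)                # one labeled sample: (index i, value x[i])
        if x[i] == 1:
            return "reject"                 # one-sided error: reject only on evidence
    return "accept"

n, eps = 1000, 0.1
far_string = [1] * int(eps * n) + [0] * (n - int(eps * n))
print(sample_based_all_zeros_tester([0] * n, eps))     # always "accept"
print(sample_based_all_zeros_tester(far_string, eps))  # "reject" w.p. >= 1 - (1-eps)^(2/eps)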

 

Tom Gur, Ron D. Rothblum; Non-Interactive Proofs of Proximity; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 133-142. Doi: 10.1145/2688073.2688079 We initiate a study of non-interactive proofs of proximity. These proof-systems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear length) explicitly given proof, and a sublinear number of queries to its input. Since the verifier cannot even read the entire input, we only require it to reject inputs that are far from being valid. Thus, the verifier is only assured of the proximity of the statement to a correct one. Such proof-systems can be viewed as the NP (or more accurately MA) analogue of property testing. We explore both the power and limitations of non-interactive proofs of proximity. We show that such proof-systems can be exponentially stronger than property testers, but are exponentially weaker than the interactive proofs of proximity studied by Rothblum, Vadhan and Wigderson (STOC 2013). In addition, we show a natural problem that has a full and (almost) tight multiplicative trade-off between the length of the proof and the verifier's query complexity. On the negative side, we also show that there exist properties for which even a linearly-long (non-interactive) proof of proximity cannot significantly reduce the query complexity.
Keywords: probabilistic proof systems, property testing (ID#: 15-5044)
URL: http://doi.acm.org/10.1145/2688073.2688079

 

Moshe Babaioff, Moran Feldman, Moshe Tennenholtz; Mechanism Design with Strategic Mediators; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 307-316. Doi: 10.1145/2688073.2688081 We consider the problem of designing mechanisms that interact with strategic agents through strategic intermediaries (or mediators), and investigate the cost to society due to the mediators' strategic behavior. Selfish agents with private information are each associated with exactly one strategic mediator, and can interact with the mechanism exclusively through that mediator. Each mediator aims to optimize the combined utility of his agents, while the mechanism aims to optimize the combined utility of all agents. We focus on the problem of facility location on a metric induced by a publicly known tree. With non-strategic mediators, there is a dominant strategy mechanism that is optimal. We show that when both agents and mediators act strategically, there is no dominant strategy mechanism that achieves any approximation. We thus slightly relax the incentive constraints, and define the notion of a two-sided incentive compatible mechanism. We show that the 3-competitive deterministic mechanism suggested by Procaccia and Tennenholtz (2009) and Dekel et al. (2010) for lines extends naturally to trees, and is still 3-competitive as well as two-sided incentive compatible. This is essentially the best possible. We then show that by allowing randomization one can construct a 2-competitive randomized mechanism that is two-sided incentive compatible, and this is also essentially tight. This result also closes a gap left in the work of Procaccia and Tennenholtz (2009) and Lu et al. (2009) for the simpler problem of designing strategy-proof mechanisms for weighted agents with no mediators on a line, while extending to the more general model of trees. We also investigate a further generalization of the above setting where there are multiple levels of mediators.
Keywords: facility location, mechanism design, mediators (ID#: 15-5045)
URL: http://doi.acm.org/10.1145/2688073.2688081
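For background on the mediator-free building block that the mechanisms above extend, the sketch below illustrates the classical fact that placing a single facility at the median of reported locations on a line is strategy-proof and minimizes total distance. It is only a textbook illustration, not the paper's tree or mediator mechanism, and the helper names are assumptions.

# Classical median mechanism for a single facility on a line (textbook illustration,
# not the paper's mechanism): strategy-proof and optimal for total distance.
def median_mechanism(reports):
    s = sorted(reports)
    return s[(len(s) - 1) // 2]  # a (lower) median of the reported locations

def cost(agent_location, facility):
    return abs(agent_location - facility)

true_locations = [0.0, 2.0, 3.0, 7.0, 10.0]
truthful_facility = median_mechanism(true_locations)

# No agent can gain by misreporting: check a grid of possible lies for agent 0.
for lie in [x / 2 for x in range(-10, 30)]:
    reports = [lie] + true_locations[1:]
    assert cost(true_locations[0], median_mechanism(reports)) >= \
           cost(true_locations[0], truthful_facility)
print("median facility location:", truthful_facility)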

 

Mohammad Bavarian, Peter W. Shor; Information Causality, Szemerédi-Trotter and Algebraic Variants of CHSH; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 123-132. Doi: 10.1145/2688073.2688112 In this work, we consider the following family of two-prover one-round games. In the CHSH_q game, two parties are given x, y in F_q uniformly at random, and must produce outputs a, b in F_q, respectively, without communicating with each other. The players' objective is to maximize the probability that their outputs satisfy a+b=xy in F_q. This game was introduced by Buhrman and Massar (PRA 2005) as a large alphabet generalization of the celebrated CHSH game---which is one of the most well-studied two-prover games in quantum information theory, and which has a large number of applications to quantum cryptography and quantum complexity. Our main contributions in this paper are the first asymptotic and explicit bounds on the entangled and classical values of CHSH_q, and the realization of a rather surprising connection between CHSH_q and geometric incidence theory. On the way to these results, we also resolve a problem of Pawlowski and Winter about pairwise independent Information Causality, which, besides being interesting on its own, gives as an application a short proof of our upper bound for the entangled value of CHSH_q.
Keywords: bell inequalities and tsirelson bounds, point-line incidences, the chsh game, two player refereed games (ID#: 15-5046)
URL: http://doi.acm.org/10.1145/2688073.2688112
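The classical value of CHSH_q (the best winning probability achievable without entanglement) can be computed by brute force for very small prime q, which makes the game definition above concrete. The sketch below is an illustrative enumeration, not taken from the paper.

# Brute-force classical value of CHSH_q (illustration only): maximize, over
# deterministic strategies a: F_q -> F_q and b: F_q -> F_q, the probability that
# a(x) + b(y) = x*y (mod q) on uniform x, y. Feasible only for tiny prime q;
# for q = 2 it recovers the familiar CHSH value 3/4.
from itertools import product

def classical_value(q):
    best = 0.0
    inputs = range(q)
    for a in product(inputs, repeat=q):        # a[x] = Alice's answer on input x
        for b in product(inputs, repeat=q):    # b[y] = Bob's answer on input y
            wins = sum((a[x] + b[y]) % q == (x * y) % q
                       for x in inputs for y in inputs)
            best = max(best, wins / q**2)
    return best

print(classical_value(2))  # 0.75
print(classical_value(3))  # classical value of CHSH_3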

 

Mark Braverman, Jieming Mao; Simulating Noisy Channel Interaction; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 21-30. Doi: 10.1145/2688073.2688087 We show that T rounds of interaction over the binary symmetric channel BSC_{1/2-ε} with feedback can be simulated with O(ε^2 T) rounds of interaction over a noiseless channel. We also introduce a more general "energy cost" model of interaction over a noisy channel. We show energy cost to be equivalent to external information complexity, which implies that our simulation results are unlikely to carry over to energy complexity. Our main technical innovation is a self-reduction from simulating a noisy channel to simulating a slightly-less-noisy channel, which may have other applications in the area of interactive compression.
Keywords: communication complexity, information complexity, noisy channel (ID#: 15-5047)
URL: http://doi.acm.org/10.1145/2688073.2688087
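A back-of-the-envelope check of the ε^2 scaling above: the capacity of BSC_{1/2-ε} is 1 - h(1/2 - ε) ≈ 2ε^2/ln 2, so each noisy round carries only Θ(ε^2) bits of information. That is the intuition (not the paper's actual simulation) behind replacing T noisy rounds with O(ε^2 T) noiseless ones; the sketch below just compares the exact capacity with the approximation.

# Compare the exact BSC_{1/2 - eps} capacity with its leading-order approximation
# 2*eps^2/ln 2 (illustration of the Theta(eps^2) information per noisy round).
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(crossover):
    return 1.0 - binary_entropy(crossover)

for eps in [0.1, 0.05, 0.01]:
    cap = bsc_capacity(0.5 - eps)
    approx = 2 * eps**2 / math.log(2)   # Taylor expansion of the capacity near 1/2
    print(f"eps={eps}: capacity={cap:.6f}, 2*eps^2/ln2={approx:.6f}")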

 

Avrim Blum, Jamie Morgenstern, Ankit Sharma, Adam Smith; Privacy-Preserving Public Information for Sequential Games; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 173-180. Doi: 10.1145/2688073.2688100 In settings with incomplete information, players can find it difficult to coordinate to find states with good social welfare. For instance, one of the main reasons behind the recent financial crisis was found to be the lack of market transparency, which made it difficult for financial firms to accurately measure the risks and returns of their investments. Although regulators may have access to firms' investment decisions, directly reporting all firms' actions raises confidentiality concerns for both individuals and institutions. The natural question, therefore, is whether it is possible for the regulatory agencies to publish some information that, on one hand, helps the financial firms understand the risks of their investments better, and, at the same time, preserves the privacy of their investment decisions. More generally, when can the publication of privacy-preserving information about the state of the game improve overall outcomes such as social welfare? In this paper, we explore this question in a sequential resource-sharing game where the value gained by a player on choosing a resource depends on the number of other players who have chosen that resource in the past. Without any knowledge of the actions of the past players, the social welfare attained in this game can be arbitrarily bad. We show, however, that it is possible for the players to achieve good social welfare with the help of privacy-preserving, publicly-announced information. We model the behavior of players in this imperfect information setting in two ways -- greedy and undominated strategic behaviors -- and we prove guarantees about the social welfare that certain kinds of privacy-preserving information can help attain. To achieve the social welfare guarantees, we design a counter with improved privacy guarantees under continual observation. In addition to the resource-sharing game, we study the main question for other games including sequential versions of the cut, machine-scheduling and cost-sharing games, and games where the value attained by a player on a particular action is not only a function of the actions of the past players but also of the actions of the future players.
Keywords: game theory, privacy (ID#: 15-5048)
URL: http://doi.acm.org/10.1145/2688073.2688100
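For context on the privacy-preserving counter mentioned above, the sketch below implements the classic binary-tree ("binary mechanism") counter for ε-differential privacy under continual observation, the standard baseline that improved counters are measured against. The class name, parameters, and budget split are assumptions for the illustration; the paper's construction differs in its exact guarantees.

# Classic binary-mechanism counter under continual observation (illustration of the
# standard baseline, not the paper's improved counter).
import math
import random

class BinaryMechanismCounter:
    def __init__(self, T, epsilon):
        self.levels = max(1, math.ceil(math.log2(T + 1)))
        # Each of the ~log T dyadic levels gets an equal share of the privacy budget.
        self.scale = self.levels / epsilon          # Laplace scale per partial sum
        self.alpha = [0.0] * (self.levels + 1)      # running partial sums per level
        self.noisy = [0.0] * (self.levels + 1)      # their noisy releases
        self.t = 0

    def feed(self, bit):
        """Consume the next 0/1 item and return the noisy running count."""
        self.t += 1
        # Index of the lowest set bit of t: the level whose dyadic block closes now.
        i = (self.t & -self.t).bit_length() - 1
        self.alpha[i] = bit + sum(self.alpha[:i])   # merge the closed lower blocks
        for j in range(i):
            self.alpha[j] = 0.0
            self.noisy[j] = 0.0
        # Laplace noise as a difference of two exponentials with mean = scale.
        self.noisy[i] = self.alpha[i] + random.expovariate(1 / self.scale) - \
                        random.expovariate(1 / self.scale)
        # The current count is the sum of the noisy blocks covering [1, t].
        return sum(self.noisy[j] for j in range(self.levels + 1)
                   if (self.t >> j) & 1)

counter = BinaryMechanismCounter(T=64, epsilon=1.0)
stream = [random.randint(0, 1) for _ in range(64)]
estimates = [counter.feed(b) for b in stream]
print("true count:", sum(stream), " noisy estimate:", round(estimates[-1], 2))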

 

Rachel Cummings, Katrina Ligett, Aaron Roth, Zhiwei Steven Wu, Juba Ziani;  Accuracy for Sale: Aggregating Data with a Variance Constraint; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 317-324. Doi: 10.1145/2688073.2688106 We consider the problem of a data analyst who may purchase an unbiased estimate of some statistic from multiple data providers. From each provider i, the analyst has a choice: she may purchase an estimate from that provider that has variance chosen from a finite menu of options. Each level of variance has a cost associated with it, reported (possibly strategically) by the data provider. The analyst wants to choose the minimum cost set of variance levels, one from each provider, that will let her combine her purchased estimators into an aggregate estimator that has variance at most some fixed desired level. Moreover, she wants to do so in such a way that incentivizes the data providers to truthfully report their costs to the mechanism. We give a dominant strategy truthful solution to this problem that yields an estimator that has optimal expected cost, and violates the variance constraint by at most an additive term that tends to zero as the number of data providers grows large.
Keywords: buying data, mechanism design, vcg mechanism (ID#: 15-5049)
URL: http://doi.acm.org/10.1145/2688073.2688106
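The aggregation step referenced above rests on a standard fact worth stating explicitly: independent unbiased estimates with variances v_i, combined with weights proportional to 1/v_i, give an unbiased estimate with variance 1/Σ_i(1/v_i), so a purchased set of variance levels is feasible exactly when that quantity is at most the analyst's target. The sketch below (names and numbers are illustrative, not from the paper) computes the weights and the resulting variance.

# Inverse-variance weighting for combining independent unbiased estimates
# (background fact, not the paper's mechanism).
def aggregate_variance(variances):
    return 1.0 / sum(1.0 / v for v in variances)

def inverse_variance_weights(variances):
    total = sum(1.0 / v for v in variances)
    return [(1.0 / v) / total for v in variances]

purchased = [4.0, 2.0, 1.0]                   # variance levels bought from three providers
print(inverse_variance_weights(purchased))    # [1/7, 2/7, 4/7]
print(aggregate_variance(purchased))          # 4/7 ~= 0.571, must be <= the target variance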

 

Clement Louis Canonne, Venkatesan Guruswami, Raghu Meka, Madhu Sudan; Communication with Imperfectly Shared Randomness; ITCS '15 Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, January 2015, Pages 257-262. Doi: 10.1145/2688073.2688099 The communication complexity of many fundamental problems reduces greatly when the communicating parties share randomness that is independent of the inputs to the communication task. Natural communication processes (say between humans) however often involve large amounts of shared correlations among the communicating players, but rarely allow for perfect sharing of randomness. Can the communication complexity benefit from shared correlations as well as it does from shared randomness? This question was considered mainly in the context of simultaneous communication by Bavarian et al. [1]. In this work we study this problem in the standard interactive setting and give some general results. In particular, we show that every problem with communication complexity of k bits with perfectly shared randomness has a protocol using imperfectly shared randomness with complexity at most 2^k bits. We also show that this is best possible by exhibiting a promise problem with complexity k bits with perfectly shared randomness which requires 2^Ω(k) bits when the randomness is imperfectly shared. Along the way we also highlight some other basic problems such as compression, and agreement distillation, where shared randomness plays a central role and analyze the complexity of these problems in the imperfectly shared randomness model. The technical highlight of this work is the lower bound that goes into the result showing the tightness of our general connection. This result builds on the intuition that communication with imperfectly shared randomness needs to be less sensitive to its random inputs than communication with perfectly shared randomness. The formal proof invokes results about the small-set expansion of the noisy hypercube and an invariance principle to convert this intuition to a proof, thus giving a new application domain for these fundamental results.
Keywords: communication complexity, invariance principle, randomness (ID#: 15-5050)
URL: http://doi.acm.org/10.1145/2688073.2688099
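As a reminder of why perfectly shared randomness is so powerful in this setting, the sketch below gives the textbook constant-communication equality test with a shared random string: each party sends an inner-product hash of its input and the hashes are compared, with repetition to drive down the error. It is illustrative background, not the paper's protocol, and the repetition count and function names are assumptions.

# Textbook equality test with perfectly shared randomness (illustration only):
# with a shared random string r, the parties exchange <x, r> and <y, r> mod 2 and
# repeat; equal inputs always pass, unequal inputs fail each round with prob. 1/2.
import random

def inner_product_mod2(x_bits, r_bits):
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

def equality_protocol(x_bits, y_bits, repetitions=20, seed=0):
    shared = random.Random(seed)              # models the perfectly shared randomness
    for _ in range(repetitions):
        r = [shared.randint(0, 1) for _ in range(len(x_bits))]
        if inner_product_mod2(x_bits, r) != inner_product_mod2(y_bits, r):
            return False                      # a single mismatch certifies x != y
    return True

x = [1, 0, 1, 1, 0, 0, 1, 0]
print(equality_protocol(x, x))                # True
print(equality_protocol(x, [0] * 8))          # False with probability 1 - 2^-20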


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.