Biblio

Filters: Keyword is Analogies and Transference
2019-11-25
Zuin, Gianlucca, Chaimowicz, Luiz, Veloso, Adriano.  2018.  Learning Transferable Features For Open-Domain Question Answering. 2018 International Joint Conference on Neural Networks (IJCNN). :1–8.

Corpora used to learn open-domain Question-Answering (QA) models are typically collected from a wide variety of topics or domains. Since QA requires understanding natural language, open-domain QA models generally need very large training corpora. A simple way to alleviate this data demand is to restrict the domain covered by the QA model, leading to domain-specific QA models. While learning improved QA models for a specific domain remains challenging due to the lack of sufficient training data in the topic of interest, additional training data can be obtained from related topic domains. Thus, instead of learning a single open-domain QA model, we investigate domain adaptation approaches in order to create multiple improved domain-specific QA models. We demonstrate that this can be achieved by stratifying the source dataset, without the need to search for complementary data, unlike many other domain adaptation approaches. We propose a deep architecture that jointly exploits convolutional and recurrent networks for learning domain-specific features while transferring domain-shared features. That is, we use transferable features to enable model adaptation from multiple source domains. We consider different transference approaches designed to learn span-level and sentence-level QA models. We found that domain adaptation greatly improves sentence-level QA performance, and that span-level QA benefits from sentence information. Finally, we show that a simple clustering algorithm may be employed when the topic domains are unknown, with negligible loss in accuracy.
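
The shared-plus-specific architecture the abstract describes can be sketched concretely. The snippet below is a minimal illustration assuming PyTorch; the module names (SharedEncoder, MultiDomainQA) and layer sizes are our own placeholders, not details from the paper.

```python
# Minimal sketch of a multi-domain QA feature extractor: a shared
# conv + recurrent encoder whose features are transferred across
# domains, plus one small head per domain. Names are illustrative.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Domain-shared features: 1-D convolution over embeddings, then an LSTM."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                  # final hidden state
        return h[-1]                              # (batch, hidden)

class MultiDomainQA(nn.Module):
    """Shared encoder + per-domain sentence-level answer classifiers."""
    def __init__(self, vocab_size, num_domains, hidden=128):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, hidden=hidden)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 2) for _ in range(num_domains)]
        )

    def forward(self, tokens, domain_id):
        feats = self.encoder(tokens)              # transferable features
        return self.heads[domain_id](feats)       # domain-specific scores

model = MultiDomainQA(vocab_size=10000, num_domains=4)
scores = model(torch.randint(0, 10000, (8, 40)), domain_id=2)
print(scores.shape)  # torch.Size([8, 2])
```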

Chowdhury, Rajdeep, Mitra, Paromita, Kumar, Sukhwant, Singh, Satyam, Singh, Aditya Narayan.  2018.  Design and Implementation of Hormonal Cycle Based Cryptographic Modus Operandi and Android Application Development for Cosseted Transmission. 2018 Second International Conference on Green Computing and Internet of Things (ICGCIoT). :32–37.

Android applications have become an integral part of contemporary life. Users across the world employ a diverse assortment of applications for distinct purposes; some applications have engaged every kind of user and are universally accepted, while new applications constantly emerge to compete with those already in use. The proposed work is a pioneering approach to developing an application for transmitting messages securely. Its distinguishing feature is that messages are sent in coded form, more precisely in encrypted form, produced by the proposed cryptographic method. The focal intention of the work is to raise the level of security in data transmission. The work is multidisciplinary, drawing on biological principles to devise the cryptographic method. The hormonal system is one of the most decisive parts of human well-being and physiology: numerous hormones serve diverse purposes in human anatomy, and some are exclusive to males or females. Although the total number of hormones is large, the work employs selected male and female hormones. The hormones used, their operational cycles, and their representation in the proposed scheme open a unique way to encrypt data and raise the level of security. The enhanced security could find use in many modes and places, for personal as well as organizational purposes. The Android application implementing the proposed cryptographic method is intended both to safeguard individual users and to serve as a general model with broader societal impact.

Pham, Dinh-Lam, Ahn, Hyun, Kim, Kwanghoon.  2019.  A Temporal Work Transference Event Log Trace Classification Algorithm and Its Experimental Analysis. 2019 21st International Conference on Advanced Communication Technology (ICACT). :692–696.

In the field of process mining, much information about what happened inside an information system has been exploited, yielding significant results. However, information about the relationships between performers has been utilized and evaluated only in certain respects. In this paper, we propose an algorithm to classify temporal work transference from workflow enactment event logs. The result may be used to reduce system memory and increase computation speed; furthermore, it can serve as one factor in evaluating performers and the active role of resources in the information system.
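
As a rough sketch of what such a classification might look like, the snippet below scans a toy enactment log for hand-overs between performers and labels each transfer by its time gap. The two-class rule (immediate vs. delayed) and the log layout are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: scan a workflow enactment log, detect points where a case
# passes between performers, and classify each transfer by its gap.
# The immediate/delayed rule below is an illustrative assumption;
# the paper's own taxonomy may differ.
from datetime import datetime, timedelta

log = [  # (case_id, activity, performer, completion timestamp)
    ("c1", "register", "alice", "2019-01-07 09:00"),
    ("c1", "review",   "bob",   "2019-01-07 09:05"),
    ("c1", "approve",  "carol", "2019-01-08 14:00"),
]

def classify_transfers(events, threshold=timedelta(hours=1)):
    transfers = []
    events = sorted(events, key=lambda e: (e[0], e[3]))
    for prev, curr in zip(events, events[1:]):
        if prev[0] != curr[0] or prev[2] == curr[2]:
            continue  # different case, or no hand-over of work
        gap = (datetime.fromisoformat(curr[3])
               - datetime.fromisoformat(prev[3]))
        kind = "immediate" if gap <= threshold else "delayed"
        transfers.append((prev[2], curr[2], kind))
    return transfers

print(classify_transfers(log))
# [('alice', 'bob', 'immediate'), ('bob', 'carol', 'delayed')]
```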

2019-01-31
Boyle, Elette, Couteau, Geoffroy, Gilboa, Niv, Ishai, Yuval.  2018.  Compressing Vector OLE. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :896–912.

Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for Boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE. We suggest a new approach for fast generation of pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. Our VOLE generators can be used to enhance the efficiency of a host of cryptographic applications. These include secure arithmetic computation and non-interactive zero-knowledge proofs with reusable preprocessing. Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. We provide several constructions that offer tradeoffs between different efficiency measures and the underlying intractability assumptions.
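
The VOLE correlation itself is easy to state and helps fix ideas: the sender ends up with vectors u, v and the receiver with a scalar x and the vector w = u·x + v. The toy snippet below (plain Python over a prime field) only samples this ideal correlation; the paper's contribution, generating long pseudo-random instances from short seeds via FSS and LPN-style codes, is not reproduced here.

```python
# Toy vector-OLE correlation over a prime field: the sender holds
# vectors u, v; the receiver holds a scalar x and obtains w = u*x + v
# componentwise. This only illustrates the target correlation -- the
# paper's contribution is *generating* long pseudo-random instances
# of it from short seeds, which is not shown here.
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def ideal_vole(n, rng=random.Random(0)):
    u = [rng.randrange(P) for _ in range(n)]         # sender output
    v = [rng.randrange(P) for _ in range(n)]         # sender output
    x = rng.randrange(P)                             # receiver input
    w = [(ui * x + vi) % P for ui, vi in zip(u, v)]  # receiver output
    return (u, v), (x, w)

(u, v), (x, w) = ideal_vole(4)
assert all(wi == (ui * x + vi) % P for ui, vi, wi in zip(u, v, w))
print("receiver learned w = u*x + v without seeing u or v individually")
```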

Bläser, Markus, Ikenmeyer, Christian, Jindal, Gorav, Lysikov, Vladimir.  2018.  Generalized Matrix Completion and Algebraic Natural Proofs. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :1193–1206.

Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich's barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank $\leq b$. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity theory program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP $\subseteq$ $\exists$BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class $\exists$BPP is known to be a subset of the more well-known complexity class MA in the literature. Thus $\exists$BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero: there are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P$^{\#P}$ $\subseteq$ $\exists$BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
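
To make the completion-rank problem concrete, the following small sketch (our own illustration, assuming NumPy) builds a matrix whose entries are affine linear forms in two variables and searches a coarse grid for an assignment of minimum rank. Since computing the completion rank is NP-hard, this is a brute-force demonstration on a toy instance, not an algorithm from the paper.

```python
# Illustration of generalized matrix completion: entries are affine
# linear forms in variables (t1, t2); we search a coarse grid for an
# assignment minimizing the rank. Exact completion rank is NP-hard,
# so this is a heuristic toy demonstration, not the paper's method.
import numpy as np

def matrix_at(t1, t2):
    # Each entry is an affine linear form in t1, t2.
    return np.array([
        [1.0 + t1, t2,       0.0          ],
        [t2,       1.0 + t1, 0.0          ],
        [0.0,      0.0,      t1 + t2 + 1.0],
    ])

best = None
for t1 in np.linspace(-2, 2, 41):
    for t2 in np.linspace(-2, 2, 41):
        r = np.linalg.matrix_rank(matrix_at(t1, t2), tol=1e-8)
        if best is None or r < best[0]:
            best = (r, t1, t2)

print("minimum rank found: %d at t1=%.2f t2=%.2f" % best)
# At t1 = -1, t2 = 0 every entry vanishes, so this family has
# completion rank 0; generic assignments give rank 3.
```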

Kozma, László, Saranurak, Thatchaphol.  2018.  Smooth Heaps and a Dual View of Self-Adjusting Data Structures. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :801–814.

We present a new connection between self-adjusting binary search trees (BSTs) and heaps, two fundamental, extensively studied, and practically relevant families of data structures (Allen, Munro, 1978; Sleator, Tarjan, 1983; Fredman, Sedgewick, Sleator, Tarjan, 1986; Wilber, 1989; Fredman, 1999; Iacono, Özkan, 2014). Roughly speaking, we map an arbitrary heap algorithm within a broad and natural model to a corresponding BST algorithm with the same cost on a dual sequence of operations (i.e. the same sequence with the roles of time and key-space switched). This is the first general transformation between the two families of data structures. There is a rich theory of dynamic optimality for BSTs (i.e. the theory of competitiveness between BST algorithms). The lack of an analogous theory for heaps has been noted in the literature (e.g. Pettie, 2005, 2008). Through our connection, we transfer all instance-specific lower bounds known for BSTs to a general model of heaps, initiating a theory of dynamic optimality for heaps. On the algorithmic side, we obtain a new, simple and efficient heap algorithm, which we call the smooth heap. We show the smooth heap to be the heap-counterpart of Greedy, the BST algorithm with the strongest proven and conjectured properties from the literature, conjectured to be instance-optimal (Lucas, 1988; Munro, 2000; Demaine et al., 2009). Assuming the optimality of Greedy, the smooth heap is also optimal within our model of heap algorithms. Intriguingly, the smooth heap, although derived from a non-practical BST algorithm, is simple and easy to implement (e.g. it stores no auxiliary data besides the keys and tree pointers). It can be seen as a variation on the popular pairing heap data structure, extending it with a "power-of-two-choices" type of heuristic. For the smooth heap we obtain instance-specific upper bounds, with applications in adaptive sorting, and we see it as a promising candidate for the long-standing question of a simpler alternative to Fibonacci heaps.
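
For context, the pairing heap that the smooth heap extends is itself only a few lines. The sketch below is a standard two-pass pairing heap in Python, without the smooth heap's "power-of-two-choices" modification.

```python
# A standard two-pass pairing heap -- the baseline structure that the
# smooth heap modifies. insert/meld are O(1); delete-min links the
# root's children pairwise, then melds the pairs right-to-left.
class Node:
    def __init__(self, key):
        self.key, self.children = key, []

def meld(a, b):
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)     # larger root becomes a child
    return a

def insert(heap, key):
    return meld(heap, Node(key))

def delete_min(heap):
    kids = heap.children
    # Pass 1: link the children pairwise, left to right.
    pairs = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
             for i in range(0, len(kids), 2)]
    # Pass 2: meld the resulting pairs right to left.
    new = None
    for p in reversed(pairs):
        new = meld(new, p)
    return heap.key, new

h = None
for k in [5, 3, 8, 1, 9, 2]:
    h = insert(h, k)
out = []
while h is not None:
    k, h = delete_min(h)
    out.append(k)
print(out)  # [1, 2, 3, 5, 8, 9]
```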

Jensen, Mads Møller, Rädle, Roman, Klokmose, Clemens N., Bødker, Susanne.  2018.  Remediating a Design Tool: Implications of Digitizing Sticky Notes. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. :224:1–224:12.

Sticky notes are ubiquitous in design processes because of their tangibility and ease of use. Yet they have well-known limitations in professional design processes, as documentation and distribution are cumbersome at best. This paper compares the use of sticky notes in ideation with a remediated digital sticky-notes setup. The paper contributes a nuanced understanding of what happens when a physical design tool is remediated into digital space, emphasizing focus shifts and breakdowns caused by the technology, but also the benefits and promises inherent in the digital media. Despite users' preference for creating physical notes, handling digital notes on boards was easier, and the potential for proper documentation makes the digital setup a possible alternative. While the analogy in our remediation supported a transfer of learned handling, the users' experiences across technological setups impact their use and understanding, yielding new concerns regarding cross-device transfer and collaboration.

Jia, Kaige, Liu, Zheyu, Wei, Qi, Qiao, Fei, Liu, Xinjun, Yang, Yi, Fan, Hua, Yang, Huazhong.  2018.  Calibrating Process Variation at System Level with In-Situ Low-Precision Transfer Learning for Analog Neural Network Processors. Proceedings of the 55th Annual Design Automation Conference. :12:1–12:6.

Process Variation (PV) may cause accuracy loss in analog neural network (ANN) processors, making them hard to scale down and degrading their feasibility. This paper first analyses the impact of PV on the performance of ANN chips, then proposes an in-situ transfer learning method at the system level that reduces PV's influence using low-precision back-propagation. Simulation results show the proposed method increases tolerance of operating-point drift by 50% and tolerance of mismatch by 70%–100%, with less than 1% accuracy loss on benchmarks. It also reduces memory by 66.7% and achieves about a 50× energy-efficiency improvement for multiplications in the learning stage, compared with a conventional full-precision (32-bit float) training system.
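
The calibration idea can be illustrated with a tiny NumPy simulation (our own sketch, not the paper's chip or circuit model): the weights of a trained linear classifier are perturbed by multiplicative mismatch to emulate process variation, and a fine-tuning loop whose gradients are quantized to a few bits, standing in for low-precision on-chip back-propagation, recovers accuracy.

```python
# Sketch of in-situ calibration: a trained linear classifier suffers
# multiplicative weight mismatch (emulating process variation), then
# is fine-tuned with gradients quantized to a few bits, emulating
# low-precision on-chip back-propagation. Toy model, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

def acc(w):
    return np.mean((X @ w > 0) == y)

w_ideal = true_w.copy()                       # "trained" weights
w_chip = w_ideal * rng.normal(1.0, 0.3, 8)    # 30% mismatch on chip
print("ideal acc: %.3f  after mismatch: %.3f" % (acc(w_ideal), acc(w_chip)))

def quantize(g, bits=4):
    # Uniform quantization of the gradient to 2**bits levels.
    scale = np.max(np.abs(g)) + 1e-12
    levels = 2 ** (bits - 1)
    return np.round(g / scale * levels) / levels * scale

for _ in range(200):                          # low-precision fine-tuning
    p = 1.0 / (1.0 + np.exp(-(X @ w_chip)))   # logistic surrogate loss
    grad = X.T @ (p - y) / len(y)
    w_chip -= 0.5 * quantize(grad, bits=4)

print("after in-situ low-precision tuning: %.3f" % acc(w_chip))
```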