
2017-10-04
Sawada, Kouta, Uda, Ryuya.  2016.  Effective CAPTCHA with Amodal Completion and Aftereffects. Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication (IMCOM '16). Article No. 53.

Accounts on web services are constantly exposed to the threat of attacks. In particular, a large number of accounts can be put to unfair uses such as stealth marketing or SPAM attacks. Needless to say, the acquisition of those accounts and the attacks themselves are carried out automatically by software programs called bots. Therefore, a technology called CAPTCHA is usually used during account acquisition on web services in order to distinguish human beings from bots. The most popular kind of CAPTCHA is text-based CAPTCHA, in which distorted letters and numbers appear with obstacles or noise. However, it is known that all text-based CAPTCHA algorithms can be analyzed by computers. In addition, too much distortion or noise prevents human beings from recognizing the letters or numbers. There are other kinds of CAPTCHA methods, such as image CAPTCHA and audio CAPTCHA, but they also have problems in use. As related work, an effective text-based CAPTCHA algorithm was proposed that applies amodal completion. That CAPTCHA imposes a large computational cost on computers, while amodal completion helps human beings recognize the characters momentarily. On the other hand, momentary recognition is uncomfortable for human beings, since extreme concentration is required within ten seconds. Therefore, in this paper, we propose an improved algorithm that applies both amodal completion and aftereffects. The aftereffects extend the time available for recognizing the characters from a moment to several seconds.

Ghaffari, Mohsen, Parter, Merav.  2016.  A Polylogarithmic Gossip Algorithm for Plurality Consensus. Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing. :117–126.
Consider n anonymous nodes, each initially supporting an opinion in {1, 2, …, k}, and suppose that they should all learn the opinion with the largest support. Per round, each node contacts a random other node and exchanges B bits with it, where typically B is at most O(log n). This basic distributed computing problem is called the plurality consensus problem (in the gossip model) and it has received extensive attention. An efficient plurality protocol is one that converges to the plurality consensus as fast as possible, and the standard assumption is that each node has memory at most polylogarithmic in n. The best known time bound is due to Becchetti et al. [SODA'15], reaching plurality consensus in O(k log n) rounds using log(k+1) bits of local memory, under some mild assumptions. As stated by Becchetti et al., achieving a polylogarithmic time complexity remained an open question. Resolving this question, we present an algorithm that with high probability reaches plurality consensus in O(log k log n) rounds, while having message and memory size of log k + O(1) bits. This even holds under considerably more relaxed assumptions regarding the initial bias (towards plurality) compared to those of prior work. The algorithm is based on a very simple and arguably natural mechanism.
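As a point of reference for the gossip model described above, the following minimal Python sketch simulates a simple plurality-consensus dynamic (the classic undecided-state rule, not the authors' polylogarithmic protocol); node counts, opinions, and the round limit are illustrative assumptions.

```python
import random
from collections import Counter

def gossip_plurality(opinions, rounds=200, seed=0):
    """Toy plurality consensus in the gossip model.

    Each round every node contacts one random other node. We use the
    simple 'undecided-state' dynamics as an illustration; the
    Ghaffari-Parter protocol achieving O(log k log n) rounds is more
    involved.
    """
    rng = random.Random(seed)
    state = list(opinions)              # None encodes the undecided state
    n = len(state)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n - 1)
            j += (j >= i)               # uniform over nodes other than i
            if state[i] is None:        # undecided nodes adopt what they hear
                state[i] = state[j]
            elif state[j] is not None and state[j] != state[i]:
                state[i] = None         # conflicting opinions -> undecided
        if len({s for s in state if s is not None}) <= 1:
            break
    return Counter(s for s in state if s is not None)

# 60% opinion 1, 30% opinion 2, 10% opinion 3: opinion 1 should win.
print(gossip_plurality([1] * 60 + [2] * 30 + [3] * 10))
```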
Chatzopoulos, Dimitris, Hui, Pan.  2016.  Asynchronous Reputation Systems in Device-to-device Ecosystems. Proceedings of the 8th ACM International Workshop on Hot Topics in Planet-scale mObile Computing and Online Social neTworking. :25–30.
Advances in Device-to-Device (D2D) ecosystems have brought on mobile applications that utilise nearby mobile devices in order to improve users' quality of experience (QoE). The interactions between the mobile devices have to be transparent to the end users and can involve many services – opportunistic networking, traffic offloading, computation offloading, cooperative streaming and P2P-based k-anonymity location privacy services, to name a few. Whenever mobile users are willing to "ask for help" from their neighbours, they need to make non-trivial decisions in order to maximise their utility. Current motivation approaches for mobile users that participate in such environments are of two types: (i) credit-based and (ii) reputation-based. These approaches rely on centralised authorities, require prohibitively many messages, or require tamper-resistant security modules. In this paper we propose a trust-based approach that does not require synchronisation between the mobile users. Moreover, we present the three-way tradeoff between consistency, message exchange and awareness, and we conclude that our approach can provide first-rate data to neighbour selection mechanisms for D2D ecosystems with much less overhead.
Tu, Mengru, Chang, Yi-Kuo, Chen, Yi-Tan.  2016.  A Context-Aware Recommender System Framework for IoT Based Interactive Digital Signage in Urban Space. Proceedings of the Second International Conference on IoT in Urban Space. :39–42.
Digital Signage (DS) is one of the popular IoT technologies deployed in urban spaces. DS can provide wayfinding and urban information to city dwellers and convey targeted messaging and advertising to people approaching the DS. With the rise of online-to-offline (O2O) mobile commerce, DS has also become an important marketing tool in urban retailing. However, most digital signage systems today lack interactive features and a context-aware recommendation engine. The few interactive digital signage systems available today are insufficient at engaging anonymous viewers and do not consider the temporal interaction between the viewer and the DS system. To overcome the above challenges, this paper proposes a context-aware recommender system framework with a novel temporal interaction scheme for IoT-based interactive digital signage deployed in urban spaces to engage anonymous viewers. The results of experiments indicate that the proposed framework improves the advertising effectiveness of DS systems deployed in public urban spaces.
Hayes, Jamie, Troncoso, Carmela, Danezis, George.  2016.  TASP: Towards Anonymity Sets That Persist. Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society. :177–180.

Anonymous communication systems are vulnerable to long-term passive "intersection attacks". Not all users of an anonymous communication system will be online at the same time; this leaks some information about who is talking to whom. A global passive adversary observing all communications can learn the set of potential recipients of a message with more and more confidence over time. Nearly all deployed anonymous communication tools offer no protection against such attacks. In this work, we introduce TASP, a protocol used by an anonymous communication system that mitigates intersection attacks by intelligently grouping clients together into anonymity sets. We find that with a bandwidth overhead of just 8% we can dramatically extend the time necessary to perform a successful intersection attack.
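To make the threat model concrete, here is a toy Python simulation of a long-term intersection attack against an unprotected system; the user count, online probability, and uniform-online assumption are all hypothetical.

```python
import random

def intersection_attack(n_users=1000, p_online=0.3, target=0, seed=1):
    """Toy long-term intersection attack on an unprotected system.

    Each time the target's pseudonymous communication is observed, the
    adversary intersects the set of currently online users with its
    running candidate set; the candidates shrink until only the target
    remains. TASP resists this by grouping clients into persistent
    anonymity sets.
    """
    rng = random.Random(seed)
    candidates = set(range(n_users))
    rounds = 0
    while len(candidates) > 1:
        online = {u for u in range(n_users) if rng.random() < p_online}
        online.add(target)          # the target is online when it sends
        candidates &= online
        rounds += 1
    return rounds

print("rounds until the target is identified:", intersection_attack())
```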

Gao, Shu Juan, Jhang, Seong Tae.  2016.  Infrared Target Tracking Using Multi-Feature Joint Sparse Representation. Proceedings of the International Conference on Research in Adaptive and Convergent Systems. :40–45.
This paper proposes a novel sparse representation-based infrared target tracking method that uses multi-feature fusion to compensate for the incomplete description given by any single feature. In the proposed method, we extract the intensity histogram, local entropy, and local contrast mean difference information for feature representation. To combine the various features, particle candidates and the multiple feature descriptors of the dictionary templates were encoded as kernel matrices. Every candidate particle was sparsely represented as a linear combination of a set of atom vectors of a dictionary. Then, the sparse target template representation model was efficiently constructed using a kernel trick. Finally, under the particle filter framework, the weights of the particles were determined by the sparse-coefficient reconstruction errors for tracking. For tracking, a template update strategy employing Adaptive Structural Local Sparse Appearance Tracking (ASLAS) was implemented. The experimental results on benchmark data sets demonstrate better performance than many existing methods.
Pham, Thuy Thi Thanh, Le, Thi-Lan, Dao, Trung-Kien.  2016.  Fusion of Wifi and Visual Signals for Person Tracking. Proceedings of the Seventh Symposium on Information and Communication Technology. :345–351.
Person tracking is crucial in any automatic person surveillance system. In this problem, person localization and re-identification (Re-ID) are both processed simultaneously to show separate trajectories for each individual. In this paper, we propose to use a mixture of WiFi and camera systems for person tracking in indoor surveillance regions covered by WiFi signals and disjoint camera FOVs (Fields of View). A fusion method is proposed to combine the position observations obtained from each single system, WiFi or camera. The combination is done by an optimal assignment between the position observations and the predicted states from the camera and WiFi systems. The correction step of a Kalman filter is then applied for each tracker to give out state estimations of locations. The fusion method allows tracking by identification across non-overlapping cameras, with clear identity information taken from the WiFi adapter. Experiments on a multi-modal dataset show that the proposed fusion method outperforms the vision-only method.
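The pipeline above hinges on the Kalman correction step applied per tracker. A minimal sketch of that step, assuming 2-D position states and hypothetical WiFi/camera noise covariances (the paper's assignment step is omitted):

```python
import numpy as np

def kalman_correct(x, P, z, R, H=np.eye(2)):
    """Standard Kalman correction step for a 2-D position state.

    x: predicted state, P: predicted covariance,
    z: position observation, R: observation noise covariance.
    """
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected state
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P

# Fuse a coarse WiFi fix and a fine camera fix sequentially (toy values).
x, P = np.array([2.0, 3.0]), np.eye(2)                              # prediction
x, P = kalman_correct(x, P, np.array([2.8, 3.5]), 4.0 * np.eye(2))  # WiFi
x, P = kalman_correct(x, P, np.array([2.1, 3.1]), 0.2 * np.eye(2))  # camera
print(x, P)
```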
Bender, Michael A., Demaine, Erik D., Ebrahimi, Roozbeh, Fineman, Jeremy T., Johnson, Rob, Lincoln, Andrea, Lynch, Jayson, McCauley, Samuel.  2016.  Cache-Adaptive Analysis. Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures. :135–144.
Memory efficiency and locality have substantial impact on the performance of programs, particularly when operating on large data sets. Thus, memory- or I/O-efficient algorithms have received significant attention both in theory and practice. The widespread deployment of multicore machines, however, brings new challenges. Specifically, since the memory (RAM) is shared across multiple processes, the effective memory-size allocated to each process fluctuates over time. This paper presents techniques for designing and analyzing algorithms in a cache-adaptive setting, where the RAM available to the algorithm changes over time. These techniques make analyzing algorithms in the cache-adaptive model almost as easy as in the external-memory (DAM) model. Our techniques enable us to analyze a wide variety of algorithms — Master-Method-style algorithms, Akra-Bazzi-style algorithms, collections of mutually recursive algorithms, and algorithms, such as FFT, that break problems of size N into subproblems of size Θ(N^c). We demonstrate the effectiveness of these techniques by deriving several results: 1. We give a simple recipe for determining whether common divide-and-conquer cache-oblivious algorithms are optimally cache-adaptive. 2. We show how to bound an algorithm's non-optimality. We give a tight analysis showing that a class of cache-oblivious algorithms is a logarithmic factor worse than optimal. 3. We show the generality of our techniques by analyzing the cache-oblivious FFT algorithm, which is not covered by the above theorems. Nonetheless, the same general techniques can show that it is at most O(log log N) away from optimal in the cache-adaptive setting, and that this bound is tight. These general theorems give concrete results about several algorithms that could not be analyzed using earlier techniques. For example, our results apply to Fast Fourier Transform, matrix multiplication, Jacobi Multipass Filter, and cache-oblivious dynamic-programming algorithms, such as Longest Common Subsequence and Edit Distance. Our results also give algorithm designers clear guidelines for creating optimally cache-adaptive algorithms.
Donkers, Tim, Loepp, Benedikt, Ziegler, Jürgen.  2016.  Tag-Enhanced Collaborative Filtering for Increasing Transparency and Interactive Control. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :169–173.
To increase transparency and interactive control in Recommender Systems, we extended the Matrix Factorization technique widely used in Collaborative Filtering by learning an integrated model of user-generated tags and latent factors derived from user ratings. Our approach enables users to manipulate their preference profile expressed implicitly in the (intransparent) factor space through explicitly presented tags. Furthermore, it seems helpful in cold-start situations since user preferences can be elicited via meaningful tags instead of ratings. We evaluate this approach and present a user study that to our knowledge is the most extensive empirical study of tag-enhanced recommending to date. Among other findings, we obtained promising results in terms of recommendation quality and perceived transparency, as well as regarding user experience, which we analyzed by Structural Equation Modeling.
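As a rough illustration of the idea of an integrated tag/factor model, the sketch below trains a plain matrix factorization by SGD and then derives tag vectors as centroids of item factors, letting a user profile be shifted through an explicit tag. The toy data, learning rates, and the centroid construction are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 8, 3
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)   # toy ratings
item_tags = {0: {"action"}, 1: {"action", "comedy"}, 2: {"comedy"}}  # toy tags

# Plain matrix factorization by SGD: R[u, i] ~ U[u] @ V[i].
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
lr, reg = 0.01, 0.05
for _ in range(2000):
    u, i = rng.integers(n_users), rng.integers(n_items)
    pu, qi = U[u].copy(), V[i].copy()
    err = R[u, i] - pu @ qi
    U[u] += lr * (err * qi - reg * pu)
    V[i] += lr * (err * pu - reg * qi)

# Represent a tag as the centroid of the factor vectors of items carrying it,
# then let the user push their profile toward that tag (interactive control).
def tag_vector(tag):
    items = [i for i, tags in item_tags.items() if tag in tags]
    return V[items].mean(axis=0)

def adjust_profile(u, tag, weight=0.5):
    return U[u] + weight * tag_vector(tag)

print("item scores after boosting 'comedy':", adjust_profile(0, "comedy") @ V.T)
```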
Waitelonis, Jörg, Jürges, Henrik, Sack, Harald.  2016.  Don't Compare Apples to Oranges: Extending GERBIL for a Fine-Grained NEL Evaluation. Proceedings of the 12th International Conference on Semantic Systems. :65–72.
In recent years, named entity linking (NEL) tools were primarily developed as general approaches, whereas today numerous tools focus on specific domains, such as the mapping of persons and organizations only, or the annotation of locations or events in microposts. However, the available benchmark datasets used for the evaluation of NEL tools do not reflect this focusing trend. We have analyzed the evaluation process applied in the NEL benchmarking framework GERBIL [16] and its benchmark datasets. Based on these insights, we extend the GERBIL framework to enable a more fine-grained evaluation and in-depth analysis of the used benchmark datasets according to different emphases. In this paper, we present the implementation of an adaptive filter for arbitrary entities as well as a system to automatically measure benchmark dataset properties, such as the extent of content-related ambiguity and diversity. The implementation as well as a result visualization are integrated into the publicly available GERBIL framework.
Van, Hoang Thien, Van Vu, Giang, Le, Thai Hoang.  2016.  Fingerprint Enhancement for Direct Grayscale Minutiae Extraction by Combining MFRAT and Gabor Filters. Proceedings of the Seventh Symposium on Information and Communication Technology. :360–367.
Minutiae are important features in fingerprint matching. The effectiveness of minutiae extraction depends greatly on the results of fingerprint enhancement. This paper proposes a novel fingerprint enhancement method for direct grayscale minutiae extraction based on combining Gabor filters with Adaptive Modified Finite Radon Transform (AMFRAT) filters. First, the proposed method uses Gabor filters as band-pass filters to remove noise and clarify ridges. Next, AMFRAT filters are applied to connect broken ridges, fill in the created holes, and quickly clarify the linear symmetry of the ridges. AMFRAT is an MFRAT filter whose window size is adaptively adjusted according to the coherence values: a small window size for high-curvature ridge areas (small coherence values), and vice versa. As a result, the ridges become linear-symmetry areas that are more suitable for direct grayscale minutiae extraction. Finally, a linear symmetry filter is used only for locating minutiae in an inverse model, as "lack of linear symmetry" occurs at minutiae points. Experimental results on the FVC2004 DB4 database (set A) show that the proposed method is capable of improving the goodness index (GI).
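A minimal sketch of the Gabor band-pass stage described above (the AMFRAT stage is omitted); the orientation, frequency, and kernel parameters are assumed values that would normally be estimated per block from the fingerprint's orientation and frequency fields.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=17):
    """Even-symmetric Gabor kernel tuned to a ridge orientation/frequency.

    theta is the local ridge orientation in radians and freq the ridge
    frequency in cycles per pixel.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    return envelope * np.cos(2 * np.pi * freq * xr)  # band-pass along ridges

img = np.random.rand(64, 64)        # stand-in for a fingerprint block
enhanced = convolve2d(img, gabor_kernel(theta=np.pi / 4, freq=0.1),
                      mode="same", boundary="symm")
print(enhanced.shape)
```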
Lee, Won-Jong, Hwang, Seok Joong, Shin, Youngsam, Ryu, Soojung, Ihm, Insung.  2016.  Adaptive Multi-rate Ray Sampling on Mobile Ray Tracing GPU. SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications. :3:1–3:6.
We present an adaptive multi-rate ray sampling algorithm targeting mobile ray-tracing GPUs. We efficiently combine two existing algorithms, adaptive supersampling and undersampling, into a single framework targeting ray-tracing GPUs and extend it to a new multi-rate sampling scheme by utilizing tile-based rendering and frame-to-frame coherency. The experimental results show that our implementation is a versatile solution for future ray-tracing GPUs as it provides up to 2.98 times better efficiency in terms of performance per Watt by reducing the number of rays to be fed into the dedicated hardware and minimizing the memory operations.
Sun, Shi-Feng, Gu, Dawu, Liu, Joseph K., Parampalli, Udaya, Yuen, Tsz Hon.  2016.  Efficient Construction of Completely Non-Malleable CCA Secure Public Key Encryption. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :901–906.
Non-malleability is an important and intensively studied security notion for many cryptographic primitives. In the context of public key encryption, this notion means it is infeasible for an adversary to transform an encryption of some message m into one of a related message m' under the given public key. Although it provides a strong security property for many applications, it still does not suffice for some scenarios, such as systems where users can issue keys on-the-fly. In such settings, the adversary may have the power to transform the given public key and the ciphertext. To withstand such attacks, Fischlin introduced a stronger notion, known as complete non-malleability, which requires that the non-malleability property be preserved even against adversaries attempting to produce a ciphertext of some related message under the transformed public key. To date, many schemes satisfying this stronger security have been proposed, but they are either inefficient or only proved secure in the random oracle model. In this work, we put forward a new encryption scheme in the common reference string model. Based on the standard DBDH assumption, the proposed scheme is proved completely non-malleable secure against adaptive chosen ciphertext attacks in the standard model. In our scheme, well-formed public keys and ciphertexts can be publicly recognized without drawing support from unwieldy techniques like non-interactive zero-knowledge proofs or one-time signatures, thus achieving better performance.
Russu, Paolo, Demontis, Ambra, Biggio, Battista, Fumera, Giorgio, Roli, Fabio.  2016.  Secure Kernel Machines against Evasion Attacks. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security (AISec '16). :59–69.

Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines, and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
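The claim about regularizer choice can be illustrated with a toy experiment: training a hinge-loss linear classifier under an l2 penalty versus an l-infinity penalty and comparing how evenly the weights spread (more even weights force an evader to modify more features). The data and training loop below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def train_linear(X, y, reg="l2", lam=0.1, lr=0.01, epochs=500):
    """Hinge-loss linear classifier trained by subgradient descent.

    reg='l2' is the usual SVM-style penalty; reg='linf' penalizes only
    the largest weight, which tends to spread weights more evenly.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margin = y * (X @ w + b)
        mask = margin < 1                           # margin violations
        gw = -(y[mask, None] * X[mask]).sum(axis=0) / len(X)
        gb = -y[mask].sum() / len(X)
        if reg == "l2":
            gw = gw + lam * w
        else:                                       # subgradient of ||w||_inf
            j = np.argmax(np.abs(w))
            g = np.zeros_like(w)
            g[j] = np.sign(w[j])
            gw = gw + lam * g
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (50, 10)), rng.normal(-1, 1, (50, 10))])
y = np.array([1] * 50 + [-1] * 50)
for reg in ("l2", "linf"):
    w, _ = train_linear(X, y, reg)
    print(reg, "max|w| / mean|w| =", round(np.abs(w).max() / np.abs(w).mean(), 3))
```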

Kishore, Ravi, Vanarasa, Chiranjeevi, Jha, Tushant, Srinathan, Kannan.  2016.  On Perfectly Secret Message Transmission in Digraphs Tolerating Dual Failures. Proceedings of the 17th International Conference on Distributed Computing and Networking. :29:1–29:10.
Consider a synchronous distributed network which is partly controlled by an adversary. In a Perfectly Secret Message Transmission (PSMT) protocol, the sender S wishes to transmit a message to the receiver R such that the adversary learns nothing about the message. We characterize the set of directed graphs that admit PSMT protocols tolerating a dual failure model where up to t_p nodes are passively corrupted and, in addition, up to any t_f nodes may fail.
Bringer, Julien, El Omri, Othmane, Morel, Constance, Chabanne, Hervé.  2016.  Boosting GSHADE Capabilities: New Applications and Security in Malicious Setting. Proceedings of the 21st ACM on Symposium on Access Control Models and Technologies. :203–214.

The secure two-party computation (S2PC) protocols SHADE and GSHADE were introduced by Bringer et al. in the last two years. GSHADE permits computing several distances (Hamming, Euclidean, Mahalanobis) quite efficiently and is among the most efficient S2PC methods. Thus this protocol can be used to efficiently compute one-to-many identification for several kinds of biometric data (iris, face, fingerprint). In this paper, we introduce two extensions of GSHADE. The first enables us to evaluate new multiplicative functions; this way, we show how to apply GSHADE to a classical machine learning algorithm. The second is a new proposal to secure GSHADE against malicious adversaries following the recent dual-execution and cut-and-choose strategies. The additional cost is very small. By preserving GSHADE's structure, our extensions are very efficient compared to other S2PC methods.

2017-10-03
Braverman, Mark, Efremenko, Klim, Gelles, Ran, Haeupler, Bernhard.  2016.  Constant-rate Coding for Multiparty Interactive Communication is Impossible. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :999–1010.

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.

Chattopadhyay, Eshan, Goyal, Vipul, Li, Xin.  2016.  Non-malleable Extractors and Codes, with Their Many Tampered Extensions. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :285–298.

Randomness extractors and error correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects, in the context and study of tamper-resilient cryptography. These are seeded non-malleable extractors, introduced by Dodis and Wichs; seedless non-malleable extractors, introduced by Cheraghchi and Guruswami; and non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs. Besides being interesting on their own, they also have important applications in cryptography, e.g., privacy amplification with an active adversary, explicit non-malleable codes, etc., and often have unexpected connections to their non-tampered analogues. However, the known constructions are far behind their non-tampered counterparts. Indeed, the best known seeded non-malleable extractor requires min-entropy rate at least 0.49, while explicit constructions of non-malleable two-source extractors were not known even when both sources have full min-entropy, and this was left as an open problem by Cheraghchi and Guruswami. In this paper we make progress towards solving the above problems and other related generalizations. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for polylogarithmic min-entropy. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result. In fact, we construct more general seeded non-malleable extractors (that can handle multiple adversaries) which were used in the recent construction of explicit two-source extractors for polylogarithmic min-entropy. (2) We construct the first explicit non-malleable two-source extractor for almost full min-entropy, thus resolving the open question posed by Cheraghchi and Guruswami. (3) We motivate and initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. By using the connection found by Cheraghchi and Guruswami and providing efficient sampling algorithms, we obtain the first explicit non-malleable codes with tampering degree t, with near-optimal rate and error. We call these stronger notions one-many and many-many non-malleable codes. This provides a stronger information-theoretic analogue of a primitive known as continuous non-malleable codes. Our basic technique used in all of our constructions can be seen as inspired, in part, by the techniques previously used to construct cryptographic non-malleable commitments.

Applebaum, Benny, Lovett, Shachar.  2016.  Algebraic Attacks Against Random Local Functions and Their Countermeasures. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1087–1100.

Suppose that you have n truly random bits x = (x1, …, xn) and you wish to use them to generate m ≫ n pseudorandom bits y = (y1, …, ym) using a local mapping, i.e., each yi should depend on at most d = O(1) bits of x. In the polynomial regime of m = n^s, s > 1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute yi by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs (x_i1, …, x_id). Our goal in this paper is to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against F2-linear adversaries (i.e., the distribution y has low bias) is achieved if the predicate is (a) k = Ω(s)-resilient, i.e., uncorrelated with any k-subset of its inputs, and (b) has algebraic degree Ω(s) even after fixing Ω(s) of its inputs. We also show that these requirements are necessary, and so they form a tight characterization (up to constants) of security against linear attacks. Our positive result shows that a d-local low-bias generator can have output length n^Ω(d), answering an open question of Mossel, Shpilka and Trevisan (FOCS, 2003). Our negative result shows that a candidate pseudorandom generator proposed by the first author (computational complexity, 2015) and by O'Donnell and Witmer (CCC 2014) is insecure. We use similar techniques to refute a conjecture of Feldman, Perkins and Vempala (STOC 2015) regarding the hardness of planted constraint satisfaction problems. (2) Motivated by the cryptanalysis literature, we consider security against algebraic attacks. We provide the first theoretical treatment of such attacks by formalizing a general notion of algebraic inversion and distinguishing attacks based on the Polynomial Calculus proof system. We show that algebraic attacks succeed if and only if there exists a degree e = O(s) non-zero polynomial Q whose roots cover the roots of P or cover the roots of P's complement. As a corollary, we obtain the first example of a predicate P for which the generated sequence y passes all linear tests but fails to pass some polynomial-time computable test, answering an open question posed by the first author (Question 4.9, computational complexity 2015).
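The Random Local Functions construction itself is simple to state in code. A minimal sketch, with an assumed 5-ary XOR-AND predicate P(a,b,c,d,e) = a XOR b XOR c XOR (d AND e) of the kind discussed in this literature (mixing a linear part for resilience with a nonlinear part for algebraic degree):

```python
import random

def random_local_generator(x, m, P, d, seed=0):
    """Goldreich-style generator from Random Local Functions.

    Each output bit y_i applies the fixed public d-ary predicate P to a
    random public tuple of d distinct positions of the seed x; by the
    paper's analysis, security hinges on P's resilience and algebraic
    degree.
    """
    rng = random.Random(seed)
    n = len(x)
    tuples = [rng.sample(range(n), d) for _ in range(m)]   # public tuples
    return [P(*(x[j] for j in S)) for S in tuples]

# Assumed example predicate: linear part a^b^c plus degree-2 part d&e.
P = lambda a, b, c, d, e: a ^ b ^ c ^ (d & e)
seed_bits = [random.getrandbits(1) for _ in range(64)]
y = random_local_generator(seed_bits, m=512, P=P, d=5)
print("fraction of ones:", sum(y) / len(y))   # should look balanced
```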

Rizzi, Francesco, Morris, Karla, Sargsyan, Khachik, Mycek, Paul, Safta, Cosmin, Debusschere, Bert, LeMaitre, Olivier, Knio, Omar.  2016.  ULFM-MPI Implementation of a Resilient Task-Based Partial Differential Equations Preconditioner. Proceedings of the ACM Workshop on Fault-Tolerance for HPC at Extreme Scale. :19–26.

We present a task-based domain-decomposition preconditioner for partial differential equations (PDEs) resilient to silent data corruption (SDC) and hard faults. The algorithm exploits a reformulation of the PDE as a sampling problem, followed by a regression-based solution update that is resilient to SDC. We adopt a server-client model implemented using the User Level Fault Mitigation MPI (MPI-ULFM). All state information is held by the servers, while clients only serve as computational units. The task-based nature of the algorithm and the capabilities of ULFM are complemented at the algorithm level to support missing tasks, making the application resilient to hard faults affecting the clients. Weak and strong scaling tests up to ~115k cores show an excellent performance of the application with efficiencies above 90%, demonstrating the suitability to run at large scale. We demonstrate the resilience of the application for a 2D elliptic PDE by injecting SDC using a random single bit-flip model, and hard faults in the form of clients crashing. We show that in all cases, the application converges to the right solution. We analyze the overhead caused by the faults, and show that, for the test problem considered, the overhead incurred due to SDC is minimal compared to that from the hard faults.

Das, M. Swami, Govardhan, A., Lakshmi, D. Vijaya.  2016.  Best Practices for Web Applications to Improve Performance of QoS. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :123:1–123:9.

Web Service Architecture provides a compatible and scalable structure for web service interactions, with performance, responsiveness, reliability, and security making up the quality of a software design. Systematic quantitative approaches have been discussed for designing and developing software systems that meet performance objectives. Many companies have successfully applied these techniques in different applications to achieve better performance in terms of financial results, customer satisfaction, and other benefits. This paper describes the architecture, design, implementation, integration testing, performance, and maintenance of new applications. The most successful best practices used in world-class organizations are discussed. This will help application, component, and software system designers to develop web applications and fine-tune existing methods in line with best practices. In business process automation, many standard practices and technologies have been used to model and execute business processes. The emerging technology is web application technology, which provides great flexibility for the development of interoperable environment services. In this paper we present a case study of an Automatic Gas Booking system, a business process development strategy, and best practices used in the development of software components for web applications. The classification of the QWS dataset with 2,507 records, service invocations, integration, and security for web applications are also discussed.

Herold, Nadine, Kinkelin, Holger, Carle, Georg.  2016.  Collaborative Incident Handling Based on the Blackboard-Pattern. Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security. :25–34.

Defending computer networks from ongoing security incidents is a key requirement to ensure service continuity. Handling incidents in real-time is a complex process consisting of three steps: intrusion detection, alert processing, and intrusion response. Useful and automated incident handling requires a comprehensive view of the process and tightly interleaved steps. Existing solutions for incident handling merely focus on a single step, leaving the other steps completely aside. Incompatible and encapsulated partial solutions are the consequence. This paper proposes an incident handling system (IHS) based on a novel execution model that allows interleaving and collaborative interaction between the incident handling steps, realized using the Blackboard Pattern. Our holistic information model lays the foundation for conflict-free collaboration. The incident handling steps are further segmented into exchangeable functional blocks distributed across the network. To show the applicability of our approach, typical use cases for incident handling systems are identified and tested with our implementation.
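A minimal sketch of the Blackboard Pattern in the incident-handling setting, with two hypothetical knowledge sources (an alert correlator and a responder) posting to a shared store; the paper's IHS uses a far richer information model and distributed functional blocks.

```python
class Blackboard:
    """Shared store through which incident-handling steps collaborate."""
    def __init__(self):
        self.entries = []            # alerts, correlated incidents, responses
        self.sources = []            # knowledge sources (functional blocks)

    def register(self, source):
        self.sources.append(source)

    def post(self, kind, data):
        self.entries.append((kind, data))

    def run(self):
        # Control loop: let any source that can contribute act, repeating
        # until no source has anything left to add.
        progress = True
        while progress:
            progress = any(s.act(self) for s in self.sources)

class AlertCorrelator:
    """Turns raw IDS alerts on the blackboard into incidents."""
    def act(self, bb):
        alerts = {d for k, d in bb.entries if k == "alert"}
        done = {d for k, d in bb.entries if k == "incident"}
        for a in alerts - done:
            bb.post("incident", a)
        return bool(alerts - done)

class Responder:
    """Attaches a response to each incident (here: block the source host)."""
    def act(self, bb):
        incidents = {d for k, d in bb.entries if k == "incident"}
        handled = {d for k, d in bb.entries if k == "response"}
        for i in incidents - handled:
            bb.post("response", i)   # e.g. push a firewall rule for host i
        return bool(incidents - handled)

bb = Blackboard()
for s in (AlertCorrelator(), Responder()):
    bb.register(s)
bb.post("alert", "10.0.0.5")
bb.run()
print(bb.entries)
```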

Möstl, Mischa, Schlatow, Johannes, Ernst, Rolf, Hoffmann, Henry, Merchant, Arif, Shraer, Alexander.  2016.  Self-aware Systems for the Internet-of-things. Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :21:1–21:9.

The IoT will host a large number of co-existing cyber-physical applications. Continuous change, application interference, environment dynamics, and uncertainty lead to complex effects which must be controlled to give performance and application guarantees. Application and platform self-configuration and self-awareness are one paradigm for approaching this challenge. They can leverage context knowledge to control platform and application functions and their interaction. They could play a dominant role in large-scale cyber-physical systems and systems-of-systems, simply because no person can oversee the whole system's functionality and dynamics. The IoT adds a new dimension because Internet-based services will increasingly be used in such system functions. Autonomous vehicles accessing cloud services for efficiency and comfort, as well as to reach the required level of safety and security, are an example. Such vehicle platforms will communicate with a service infrastructure that must be reliable and highly responsive. Automated continuous self-configuration of data storage might be a good basis for such services, up to the point where the different self-x strategies might affect each other, positively or negatively. This paper contains three contributions from different domains representing the current status of self-aware systems as they will meet in the Internet-of-Things, and closes with a short discussion of upcoming challenges.

Liu, Yuntao, Xie, Yang, Bao, Chongxi, Srivastava, Ankur.  2016.  An Optimization-theoretic Approach for Attacking Physical Unclonable Functions. Proceedings of the 35th International Conference on Computer-Aided Design. :45:1–45:6.

Physical unclonable functions (PUFs) utilize manufacturing variations of circuit elements to produce an unpredictable response to any challenge vector. An attack on a PUF aims to predict the PUF's response to all challenge vectors while only a small number of challenge-response pairs (CRPs) are known. The target PUFs in this paper include the Arbiter PUF (ArbPUF) and the Memristor Crossbar PUF (MXbarPUF). The manufacturing variations of the circuit elements in the targeted PUF can be characterized by a weight vector. An optimization-theoretic attack on the target PUFs is proposed. The feasible space for a PUF's weight vector is described by a convex polytope confined by the known CRPs. The centroid of the polytope is chosen as the estimate of the actual weight vector, while new CRPs are adaptively added to the original set of known CRPs. The linear behavior of both ArbPUF and MXbarPUF is proven, which ensures that the feasible space for their weight vectors is convex. Simulation shows that our approach needs 71.4% fewer known CRPs and 86.5% less time than the state-of-the-art machine-learning-based approach.
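A toy version of the polytope idea for a linear (arbiter-style) PUF model: each known CRP contributes a half-space constraint on the weight vector, and a centroid surrogate of the resulting polytope (here the Chebyshev center computed by linear programming, rather than the paper's exact centroid) serves as the estimate. Dimensions, CRP counts, and the box constraint are assumptions; the paper's adaptive CRP selection is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of the polytope {x : A x <= b}.

    Solves: max r  s.t.  A x + r * ||a_i|| <= b, r >= 0. The center
    serves as the attacker's estimate of the PUF weight vector.
    """
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                   # maximize the radius r
    A_lp = np.hstack([A, norms])
    res = linprog(c, A_ub=A_lp, b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1]

rng = np.random.default_rng(0)
n, m = 16, 200
w_true = rng.standard_normal(n)                    # secret delay weights
phi = rng.choice([-1.0, 1.0], size=(m, n))         # challenge feature vectors
r = np.sign(phi @ w_true)                          # observed responses

# Each CRP (phi_i, r_i) yields the half-space  -r_i * (phi_i . w) <= 0.
A = -r[:, None] * phi
b = np.zeros(m)
# Box constraints keep the polytope bounded (the scale of w is unobservable).
A = np.vstack([A, np.eye(n), -np.eye(n)])
b = np.concatenate([b, np.ones(2 * n)])

w_hat = chebyshev_center(A, b)
print("response agreement:", np.mean(np.sign(phi @ w_hat) == r))
```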

Hu, Wei, Becker, Andrew, Ardeshiricham, Armita, Tai, Yu, Ienne, Paolo, Mu, Dejun, Kastner, Ryan.  2016.  Imprecise Security: Quality and Complexity Tradeoffs for Hardware Information Flow Tracking. Proceedings of the 35th International Conference on Computer-Aided Design. :95:1–95:8.

Secure hardware design is a challenging task that goes far beyond ensuring functional correctness. Important design properties such as non-interference cannot be verified on functional circuit models due to the lack of essential information (e.g., sensitivity level) for reasoning about security. Hardware information flow tracking (IFT) techniques associate data objects in the hardware design with sensitivity labels for modeling security-related behaviors. They allow the designer to test and verify security properties related to confidentiality, integrity, and logical side channels. However, precisely accounting for each bit of information flow at the hardware level can be expensive. In this work, we focus on the precision of the IFT logic. The key idea is to selectively introduce only one-sided errors (false positives); these provide a conservative and safe information flow response while reducing the complexity of the security logic. We investigate the effect of logic synthesis on the quality and complexity of hardware IFT and reveal how different logic synthesis optimizations affect the amount of false positives and the design overheads of the IFT logic. We propose novel techniques to further simplify the IFT logic while adding no, or only a minimum number of, false positives. Additionally, we provide a solution to quantitatively introduce false positives in order to accelerate information flow security verification. Experimental results using IWLS benchmarks show that our method can reduce the complexity of GLIFT by 14.47% while adding 0.20% false positives on average. By quantitatively introducing false positives, we can achieve up to a 55.72% speedup in verification time.
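For intuition, the sketch below contrasts precise GLIFT taint propagation for a 2-input AND gate with a conservative variant that taints the output whenever any input is tainted, counting the one-sided errors (false positives) this simplification introduces. The precise AND rule is the standard GLIFT formula; the rest is an illustrative toy, not the paper's synthesis flow.

```python
def and_glift(a, a_t, b, b_t):
    """Precise GLIFT taint propagation for a 2-input AND gate.

    The output is tainted only if a tainted input can actually affect it:
    a controlling 0 on an untainted input masks taint on the other input.
    """
    return (a & b_t) | (b & a_t) | (a_t & b_t)

def and_conservative(a, a_t, b, b_t):
    """Imprecise variant: taint the output whenever any input is tainted.

    This is the kind of one-sided simplification the paper studies: it
    only adds false positives, never false negatives, and the tracking
    logic becomes much cheaper.
    """
    return a_t | b_t

# Enumerate all input/taint combinations and count the false positives
# introduced by the conservative rule (it should report 4 of 16 cases).
fp = sum(and_conservative(a, at, b, bt) and not and_glift(a, at, b, bt)
         for a in (0, 1) for b in (0, 1) for at in (0, 1) for bt in (0, 1))
print("false positives over 16 cases:", fp)
```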