Biblio
Accounts on web services are constantly exposed to the threat of attacks. In particular, large numbers of accounts can be abused for purposes such as stealth marketing or spam attacks. Needless to say, both the acquisition of such accounts and the attacks themselves are carried out automatically by software programs called bots. Therefore, a technology called CAPTCHA is commonly used during account registration on web services in order to distinguish human beings from bots. The most popular kind of CAPTCHA is text-based CAPTCHA, in which distorted letters and digits appear with obstacles or noise. However, it is known that all text-based CAPTCHA algorithms can be broken by computers. In addition, too much distortion or noise prevents human beings from recognizing the letters or digits. There are other kinds of CAPTCHA, such as image CAPTCHA and audio CAPTCHA, but they have practical problems of their own. In related work, an effective text-based CAPTCHA algorithm was proposed that applies amodal completion. That CAPTCHA imposes a large computational cost on computers, while amodal completion helps human beings recognize the characters momentarily. On the other hand, momentary recognition is uncomfortable for human beings, since extreme concentration is required within ten seconds. Therefore, in this paper, we propose an improved algorithm that applies both amodal completion and aftereffects. The aftereffects extend the time available for recognizing the characters from a moment to several seconds.
Anonymous communication systems are vulnerable to long-term passive "intersection attacks". Not all users of an anonymous communication system will be online at the same time, and this leaks information about who is talking to whom. A global passive adversary observing all communications can learn the set of potential recipients of a message with increasing confidence over time. Nearly all deployed anonymous communication tools offer no protection against such attacks. In this work, we introduce TASP, a protocol used by an anonymous communication system that mitigates intersection attacks by intelligently grouping clients together into anonymity sets. We find that with a bandwidth overhead of just 8% we can dramatically extend the time necessary to perform a successful intersection attack.
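To make the threat concrete, the following is a minimal sketch of a passive intersection attack under a simplified observation model; the model and user names are illustrative assumptions, not TASP's exact threat model. The adversary intersects, across rounds, the sets of users who were online whenever the target pseudonym was active.

    # Minimal sketch of a passive intersection attack (illustrative
    # assumptions only). Each entry of `observations` is the set of users
    # online during a round in which the target pseudonym received a message.
    def intersection_attack(observations):
        candidates = set(observations[0])
        for online in observations[1:]:
            candidates &= online  # the true recipient must be online
        return candidates

    rounds = [{"alice", "bob", "carol"},
              {"alice", "carol"},
              {"carol", "dave"}]
    print(intersection_attack(rounds))  # {'carol'}

Each additional observed round shrinks the candidate set, which is why grouping clients into anonymity sets that stay online together slows the attack down.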
Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial data manipulations into the learning algorithm. Although these techniques have been shown to be effective in some adversarial learning tasks, their adoption in practice is hindered by several factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of evasion attack, as well as by unbalancing the cost of classification errors. We then discuss the security of nonlinear kernel machines, and show that a proper choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying some kernel parameters can further improve classifier security, yielding decision functions that better enclose the legitimate data. Our results on spam and PDF malware detection corroborate our analysis.
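As a concrete illustration of the two knobs the abstract refers to, regularizer choice and unbalanced error costs, here is a minimal sketch using scikit-learn on synthetic data. The specific penalty and class weights are illustrative assumptions, not the paper's recommended settings.

    # Sketch: selecting the regularizer (penalty) and unbalancing the cost
    # of classification errors (class_weight) for a linear SVM.
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # l1 regularization yields sparse weights; a heavier cost on the
    # malicious class (label 1) shifts the decision function so that it
    # better encloses the legitimate data.
    clf = LinearSVC(penalty='l1', dual=False, C=1.0,
                    class_weight={0: 1.0, 1: 5.0}).fit(X, y)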
The secure two-party computation (S2PC) protocols SHADE and GSHADE have been introduced by Bringer et al. in the last two years. GSHADE permits the computation of different distances (Hamming, Euclidean, Mahalanobis) quite efficiently and is among the most efficient S2PC methods. Thus, this protocol can be used to efficiently compute one-to-many identification for several kinds of biometric data (iris, face, fingerprint). In this paper, we introduce two extensions of GSHADE. The first one enables us to evaluate new multiplicative functions. This way, we show how to apply GSHADE to a classical machine learning algorithm. The second one is a new proposal to secure GSHADE against malicious adversaries following the recent dual execution and cut-and-choose strategies. The additional cost is very small. By preserving GSHADE's structure, our extensions are very efficient compared to other S2PC methods.
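For reference, these are the three distances GSHADE evaluates, computed here in the clear with NumPy. This sketch only shows what is computed, not how the protocol computes it obliviously on shared inputs.

    # The distance functions named above, in the clear (not the protocol).
    import numpy as np

    def hamming(a, b):                 # a, b: 0/1 bit vectors (e.g., iris codes)
        return int(np.sum(a != b))

    def euclidean_sq(a, b):            # squared Euclidean distance
        d = a - b
        return float(d @ d)

    def mahalanobis_sq(a, b, S_inv):   # S_inv: inverse covariance matrix
        d = a - b
        return float(d @ S_inv @ d)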
We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complement our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.
Randomness extractors and error-correcting codes are fundamental objects in computer science. Recently, there have been several natural generalizations of these objects in the context and study of tamper-resilient cryptography. These are seeded non-malleable extractors, introduced by Dodis and Wichs; seedless non-malleable extractors, introduced by Cheraghchi and Guruswami; and non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs. Besides being interesting on their own, they also have important applications in cryptography, e.g., privacy amplification with an active adversary and explicit non-malleable codes, and they often have unexpected connections to their non-tampered analogues. However, the known constructions are far behind their non-tampered counterparts. Indeed, the best known seeded non-malleable extractor requires min-entropy rate at least 0.49, while explicit constructions of non-malleable two-source extractors were not known even when both sources have full min-entropy; this was left as an open problem by Cheraghchi and Guruswami. In this paper we make progress towards solving the above problems and other related generalizations. Our contributions are as follows. (1) We construct an explicit seeded non-malleable extractor for polylogarithmic min-entropy. This dramatically improves all previous results and gives a simpler 2-round privacy amplification protocol with optimal entropy loss, matching the best known result. In fact, we construct more general seeded non-malleable extractors (that can handle multiple adversaries), which were used in the recent construction of explicit two-source extractors for polylogarithmic min-entropy. (2) We construct the first explicit non-malleable two-source extractor for almost full min-entropy, thus resolving the open question posed by Cheraghchi and Guruswami. (3) We motivate and initiate the study of two natural generalizations of seedless non-malleable extractors and non-malleable codes, where the sources or the codeword may be tampered many times. By using the connection found by Cheraghchi and Guruswami and providing efficient sampling algorithms, we obtain the first explicit non-malleable codes with tampering degree t, with near-optimal rate and error. We call these stronger notions one-many and many-many non-malleable codes. This provides a stronger information-theoretic analogue of a primitive known as continuous non-malleable codes. The basic technique used in all of our constructions can be seen as inspired, in part, by the techniques previously used to construct cryptographic non-malleable commitments.
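For orientation, the seeded notion can be stated informally as follows; this is the standard Dodis-Wichs style formulation paraphrased here, not text from the paper. For every source X with sufficient min-entropy, a uniform seed Y, and any tampering function f with no fixed points,

    \[
      \bigl(\mathrm{nmExt}(X, Y),\ \mathrm{nmExt}(X, f(Y)),\ Y\bigr)
      \;\approx_{\varepsilon}\;
      \bigl(U_m,\ \mathrm{nmExt}(X, f(Y)),\ Y\bigr),
    \]

i.e., the extractor's output looks uniform even to an adversary that also sees the output on a tampered seed.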
Suppose that you have n truly random bits x=(x_1,…,x_n) and you wish to use them to generate m ≫ n pseudorandom bits y=(y_1,…,y_m) using a local mapping, i.e., each y_i should depend on at most d=O(1) bits of x. In the polynomial regime of m=n^s, s>1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute y_i by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs (x_{i_1},…,x_{i_d}). Our goal in this paper is to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against F_2-linear adversaries (i.e., the distribution y has low bias) is achieved if the predicate (a) is k-resilient for k=Ω(s), i.e., uncorrelated with any k-subset of its inputs, and (b) has algebraic degree Ω(s) even after fixing Ω(s) of its inputs. We also show that these requirements are necessary, and so they form a tight characterization (up to constants) of security against linear attacks. Our positive result shows that a d-local low-bias generator can have output length n^{Ω(d)}, answering an open question of Mossel, Shpilka and Trevisan (FOCS, 2003). Our negative result shows that a candidate pseudorandom generator proposed by the first author (computational complexity, 2015) and by O'Donnell and Witmer (CCC 2014) is insecure. We use similar techniques to refute a conjecture of Feldman, Perkins and Vempala (STOC 2015) regarding the hardness of planted constraint satisfaction problems. (2) Motivated by the cryptanalysis literature, we consider security against algebraic attacks. We provide the first theoretical treatment of such attacks by formalizing a general notion of algebraic inversion and distinguishing attacks based on the Polynomial Calculus proof system. We show that algebraic attacks succeed if and only if there exists a nonzero polynomial Q of degree e=O(s) whose roots cover the roots of P or cover the roots of P's complement. As a corollary, we obtain the first example of a predicate P for which the generated sequence y passes all linear tests but fails to pass some polynomial-time computable test, answering an open question posed by the first author (Question 4.9, computational complexity 2015).
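A minimal sketch of the Random Local Functions construction just described; the concrete predicate below is a placeholder chosen for illustration, not one analyzed in the paper.

    # Goldreich-style local PRG: each output bit applies a fixed public
    # d-ary predicate P to a random public tuple of d distinct input bits.
    import random

    def goldreich_prg(x, m, d, P, seed=0):
        rng = random.Random(seed)                       # public randomness
        n = len(x)
        tuples = [rng.sample(range(n), d) for _ in range(m)]
        return [P(*(x[i] for i in idx)) for idx in tuples]

    # Placeholder predicate with d = 5: XOR of two bits plus a majority of
    # three (purely illustrative).
    def P(a, b, c, e, f):
        return (a ^ b) ^ (1 if c + e + f >= 2 else 0)

    x = [random.randint(0, 1) for _ in range(64)]       # n = 64 seed bits
    y = goldreich_prg(x, m=64**2, d=5, P=P)             # m = n^2 output bits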
We present a task-based domain-decomposition preconditioner for partial differential equations (PDEs) resilient to silent data corruption (SDC) and hard faults. The algorithm exploits a reformulation of the PDE as a sampling problem, followed by a regression-based solution update that is resilient to SDC. We adopt a server-client model implemented using the User Level Fault Mitigation MPI (MPI-ULFM). All state information is held by the servers, while the clients serve only as computational units. The task-based nature of the algorithm and the capabilities of ULFM are complemented at the algorithm level to support missing tasks, making the application resilient to hard faults affecting the clients. Weak and strong scaling tests up to ~115k cores show excellent performance of the application, with efficiencies above 90%, demonstrating its suitability for running at large scale. We demonstrate the resilience of the application for a 2D elliptic PDE by injecting SDC using a random single bit-flip model, and hard faults in the form of clients crashing. We show that in all cases the application converges to the correct solution. We analyze the overhead caused by the faults and show that, for the test problem considered, the overhead incurred due to SDC is minimal compared to that from the hard faults.
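The random single bit-flip model mentioned above can be illustrated as follows; this is a generic sketch, and the details of the paper's injector are assumptions here.

    # SDC injection sketch: flip one uniformly chosen bit of a 64-bit float.
    import random
    import struct

    def flip_random_bit(value: float) -> float:
        bits = struct.unpack('<Q', struct.pack('<d', value))[0]
        bits ^= 1 << random.randrange(64)   # corrupt one of the 64 bits
        return struct.unpack('<d', struct.pack('<Q', bits))[0]

    print(flip_random_bit(3.141592653589793))

Depending on which bit is hit, the corrupted value may differ negligibly or by many orders of magnitude, which is what makes SDC detection through regression-based outlier rejection meaningful.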
Web service architecture provides a compatible and scalable structure for web service interactions, combining performance, responsiveness, reliability, and security into a quality software design. Systematic quantitative approaches have been discussed for designing and developing software systems that meet performance objectives. Many companies have successfully applied these techniques in different applications to achieve better performance in terms of financial results, customer satisfaction, and other benefits. This paper describes the architecture, design, implementation, integration testing, performance, and maintenance of new applications. The most successful best practices used in world-class organizations are discussed. This will help application, component, and software system designers to develop web applications and fine-tune existing methods in line with best practices. In business process automation, many standard practices and technologies have been used to model and execute business processes. The emerging technology is web application technology, which provides great flexibility for the development of interoperable environment services. In this paper we present a case study of an automatic gas booking system, a business process development strategy, and the best practices used in the development of software components for web applications. The classification of the QWS dataset with 2,507 records, service invocations, and integration and security for web applications are also discussed.
Defending computer networks from ongoing security incidents is a key requirement for ensuring service continuity. Handling incidents in real time is a complex process consisting of three steps: intrusion detection, alert processing, and intrusion response. Useful, automated incident handling requires a comprehensive view of the process and tightly interleaved steps. Existing solutions for incident handling focus merely on a single step, leaving the other steps completely aside; incompatible and encapsulated partial solutions are the consequence. This paper proposes an incident handling system (IHS) based on a novel execution model that allows interleaving and collaborative interaction between the incident handling steps, realized using the Blackboard Pattern. Our holistic information model lays the foundation for conflict-free collaboration. The incident handling steps are further segmented into exchangeable functional blocks distributed across the network. To show the applicability of our approach, typical use cases for incident handling systems are identified and tested with our implementation.
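A generic sketch of the Blackboard Pattern the abstract refers to; this is a textbook illustration of the pattern, not the paper's IHS implementation, and the fact fields are invented for the example. Knowledge sources watch a shared blackboard and post derived facts for one another, which is what lets detection, alert processing, and response interleave.

    # Blackboard Pattern sketch: sources react to facts posted by others.
    class Blackboard:
        def __init__(self):
            self.facts = []
            self.sources = []

        def post(self, fact):
            self.facts.append(fact)
            for source in self.sources:
                source.react(fact, self)

    class AlertProcessor:
        def react(self, fact, board):
            if fact.get('type') == 'ids_alert':          # correlate raw alerts
                board.post({'type': 'incident', 'host': fact['host']})

    class Responder:
        def react(self, fact, board):
            if fact.get('type') == 'incident':           # trigger a response
                print('isolating host', fact['host'])

    board = Blackboard()
    board.sources = [AlertProcessor(), Responder()]
    board.post({'type': 'ids_alert', 'host': '10.0.0.5'})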
The IoT will host a large number of co-existing cyber-physical applications. Continuous change, application interference, environment dynamics, and uncertainty lead to complex effects which must be controlled to give performance and application guarantees. Application and platform self-configuration and self-awareness are one paradigm for approaching this challenge. They can leverage context knowledge to control platform and application functions and their interaction. They could play a dominant role in large-scale cyber-physical systems and systems-of-systems, simply because no person can oversee the whole system functionality and dynamics. IoT adds a new dimension because Internet-based services will increasingly be used in such system functions. Autonomous vehicles accessing cloud services for efficiency and comfort, as well as to reach the required level of safety and security, are an example. Such vehicle platforms will communicate with a service infrastructure that must be reliable and highly responsive. Automated continuous self-configuration of data storage might be a good basis for such services, up to the point where the different self-x strategies might affect each other, positively or negatively. This paper contains three contributions from different domains representing the current status of self-aware systems as they will meet in the Internet of Things, and closes with a short discussion of upcoming challenges.
Physical unclonable functions (PUFs) utilize manufacturing variations of circuit elements to produce unpredictable responses to any challenge vector. An attack on a PUF aims to predict the PUF's responses to all challenge vectors while only a small number of challenge-response pairs (CRPs) are known. The target PUFs in this paper include the Arbiter PUF (ArbPUF) and the Memristor Crossbar PUF (MXbarPUF). The manufacturing variations of the circuit elements in the targeted PUF can be characterized by a weight vector. We propose an optimization-theoretic attack on the target PUFs. The feasible space for a PUF's weight vector is described by a convex polytope confined by the known CRPs. The centroid of the polytope is chosen as the estimate of the actual weight vector, while new CRPs are adaptively added to the original set of known CRPs. The linear behavior of both the ArbPUF and the MXbarPUF is proven, which ensures that the feasible space for their weight vectors is convex. Simulation shows that our approach needs 71.4% fewer known CRPs and 86.5% less time than the state-of-the-art machine-learning-based approach.
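The geometric core of the attack can be sketched as follows. Since the exact centroid of a polytope is expensive to compute, this illustration uses the Chebyshev center obtained via linear programming as a tractable stand-in; the paper's own centroid computation and the box normalization below are assumptions of this sketch.

    # Sketch: known CRPs confine the weight vector w to a convex polytope
    # (each CRP gives the half-space r_i * (w . phi_i) >= 0 for a linear
    # PUF with response r = sign(w . phi(c))); estimate w by a central
    # point of that polytope.
    import numpy as np
    from scipy.optimize import linprog

    def chebyshev_center(Phi, r):
        """Phi: feature vectors phi(c_i) of known challenges (k x n);
        r: observed responses in {-1, +1}. w is boxed into [-1, 1]^n to
        keep the (otherwise conical) feasible region bounded."""
        k, n = Phi.shape
        A = -r[:, None] * Phi                       # CRP constraints: A w <= 0
        A = np.vstack([A, np.eye(n), -np.eye(n)])   # box: -1 <= w <= 1
        b = np.concatenate([np.zeros(k), np.ones(2 * n)])
        norms = np.linalg.norm(A, axis=1)
        # Variables (w, R): maximize R s.t. A w + R * ||A_j|| <= b
        A_lp = np.hstack([A, norms[:, None]])
        c = np.zeros(n + 1); c[-1] = -1.0           # minimize -R
        res = linprog(c, A_ub=A_lp, b_ub=b,
                      bounds=[(None, None)] * n + [(0, None)])
        return res.x[:n]

In the adaptive loop described in the abstract, the attacker would query new challenges, add the resulting CRPs as constraints, and recompute the center until the estimate stabilizes.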
Secure hardware design is a challenging task that goes far beyond ensuring functional correctness. Important design properties such as non-interference cannot be verified on functional circuit models due to the lack of essential information (e.g., sensitivity level) for reasoning about security. Hardware information flow tracking (IFT) techniques associate data objects in the hardware design with sensitivity labels for modeling security-related behaviors. They allow the designer to test and verify security properties related to confidentiality, integrity, and logical side channels. However, precisely accounting for each bit of information flow at the hardware level can be expensive. In this work, we focus on the precision of the IFT logic. The key idea is to selectively introduce only one-sided errors (false positives); these provide a conservative and safe information flow response while reducing the complexity of the security logic. We investigate the effect of logic synthesis on the quality and complexity of hardware IFT and reveal how different logic synthesis optimizations affect the amount of false positives and the design overheads of the IFT logic. We propose novel techniques to further simplify the IFT logic while adding no, or only a minimal number of, false positives. Additionally, we provide a solution for quantitatively introducing false positives in order to accelerate information flow security verification. Experimental results using the IWLS benchmarks show that our method can reduce the complexity of GLIFT by 14.47% while adding 0.20% false positives on average. By quantitatively introducing false positives, we can achieve up to a 55.72% speedup in verification time.
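The precision trade-off can be illustrated on a single 2-input AND gate using the standard GLIFT shadow logic from the gate-level IFT literature (shown here for illustration; the paper's simplification techniques operate on whole netlists). The conservative variant is cheaper but reports a false positive whenever an untainted 0 input already fixes the output.

    # Taint propagation for a 2-input AND gate; a_t, b_t are taint labels.
    def and_taint_precise(a, a_t, b, b_t):
        # Output is tainted only if a tainted input can actually affect it.
        return (a_t & b_t) | (a_t & b) | (b_t & a)

    def and_taint_conservative(a, a_t, b, b_t):
        # Cheaper one-sided approximation: taint if any input is tainted.
        return a_t | b_t

    # a = 0 (untainted) forces the AND output to 0 regardless of b, so the
    # precise logic reports "untainted" while the conservative logic flags
    # a false positive.
    print(and_taint_precise(0, 0, 1, 1))       # 0
    print(and_taint_conservative(0, 0, 1, 1))  # 1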