Bibliography
Trust in SSL-based communication on the Internet is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) making sure that it is not revoked. Currently, Certificate Revocation checks (i.e., step (iii) above) are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, both current approaches tend to incur such a high overhead that several browsers (including almost all mobile ones) choose not to check certificate revocation status, thereby exposing their users to significant security risks. To address this issue, we propose DCSP: a new low-latency approach that provides up-to-date and accurate certificate revocation information. DCSP capitalizes on the existing scalable and high-performance infrastructure of DNS. DCSP minimizes end-user latency while, at the same time, requiring only a small number of cryptographic signatures by the CAs. Our design and initial performance results show that DCSP has the potential to perform an order of magnitude faster than the current state-of-the-art alternatives.
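To make the DNS-based lookup concrete, here is a minimal client-side sketch in Python using dnspython; the record label scheme and TXT payload format are invented for illustration and are not the actual DCSP specification.

```python
# Illustrative only: queries a hypothetical per-certificate revocation
# record over DNS. The label scheme and TXT payload format are
# assumptions, not the DCSP wire format.
import hashlib
import dns.resolver

def check_revocation(cert_der: bytes, zone: str = "revocation.example-ca.com") -> str:
    # Identify the certificate by a hash of its DER encoding (assumed scheme).
    cert_id = hashlib.sha256(cert_der).hexdigest()[:32]
    qname = f"{cert_id}.{zone}"
    try:
        answers = dns.resolver.resolve(qname, "TXT")
    except dns.resolver.NXDOMAIN:
        return "unknown"
    for rdata in answers:
        status = b"".join(rdata.strings).decode()
        if status.startswith("revoked"):
            return "revoked"
    return "good"
```

Because the records ride on ordinary DNS, they would benefit from existing resolver caching, which is where the latency advantage over CRL/OCSP fetches would come from.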
Privilege escalation is a common and serious type of security attack. Although experience shows that many applications are vulnerable to such attacks, attackers rarely succeed on the first attempt: their initial probing attempts often fail before a successful breach of access control is achieved. This paper presents an approach to automatically instrument application source code so that it reports failed access attempts, which may indicate privilege escalation attacks, to a runtime application protection mechanism. The focus of this paper is primarily on instrumenting web application source code to detect access control attack events. We evaluated the false positives and false negatives of our approach using two open-source web applications.
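As a rough illustration of such instrumentation (not the paper's tool), the following Python sketch wraps access-control checks so that every failed attempt is reported to a runtime protection component; the names and the decorator approach are assumptions.

```python
# Hypothetical sketch: report failed access-control checks to a runtime
# protection mechanism that can correlate repeated failures (probing).
import functools
import logging

protection_log = logging.getLogger("access-control-events")

class AccessDenied(Exception):
    pass

def report_failed_access(func):
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        try:
            return func(user, *args, **kwargs)
        except AccessDenied:
            # A real protection mechanism could throttle or block the
            # user after repeated failures, which may indicate probing.
            protection_log.warning("failed access: user=%s op=%s",
                                   user, func.__name__)
            raise
    return wrapper

@report_failed_access
def delete_account(user, account_id):
    if not user.is_admin:          # 'is_admin' is an assumed attribute
        raise AccessDenied(account_id)
    ...  # privileged operation would run here
```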
This paper describes an approach for detecting the presence of domain name system (DNS) tunnels in network traffic. DNS tunneling is a common technique hackers use to establish command and control nodes and to exfiltrate data from networks. To generate training data sufficient to build models that detect DNS tunneling activity, a penetration testing effort was employed. We extracted features from this data and trained random forest classifiers to distinguish normal DNS activity from tunneling activity. The classifiers successfully detected the presence of the tunnels we trained on, as well as four other types of tunnels that were not part of the training set.
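A minimal sketch of the classification step, assuming a feature matrix has already been extracted from DNS traffic; the synthetic data and feature count below are placeholders, not the paper's feature set.

```python
# Sketch: train a random forest to separate normal DNS traffic (0)
# from tunneling traffic (1). Real rows would hold features such as
# query length or label entropy; here they are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # placeholder feature matrix
y = rng.integers(0, 2, size=1000)       # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```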
Attacks against websites are increasing rapidly with the expansion of web services. The growing number and diversity of web services make such attacks difficult to prevent, owing to the many known vulnerabilities in websites. To overcome this problem, it is necessary to collect the most recent attacks using decoy web honeypots and to implement countermeasures against malicious threats. Web honeypots collect not only malicious accesses by attackers but also benign accesses such as those by web search crawlers. Thus, it is essential to develop a means of automatically identifying malicious accesses in the collected data, which mixes malicious and benign accesses. In particular, detecting vulnerability scanning, a common preliminary step of an attack, is important for preventing attacks. In this study, we focus on classifying accesses as web crawling or vulnerability scanning, since these two kinds of access are too similar to distinguish otherwise. We propose a feature vector that includes features of collective accesses, e.g., the intervals of request arrivals and the dispersion of source port numbers, obtained with multiple honeypots deployed in different networks. Through an evaluation using data collected from 37 honeypots in a real network, we show that features of collective accesses are advantageous for vulnerability scanning and crawler classification.
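For illustration, here is a sketch of how two of these collective features might be computed for a single source IP, with accesses merged across honeypots; the exact feature definitions are assumptions, not the paper's.

```python
# Sketch of collective-access features for one source IP, aggregated
# across multiple honeypots (feature definitions are illustrative).
import numpy as np

def collective_features(accesses):
    """accesses: list of (timestamp_seconds, source_port) tuples,
    merged from multiple honeypots, for a single source IP."""
    accesses = sorted(accesses)
    times = np.array([t for t, _ in accesses], dtype=float)
    ports = np.array([p for _, p in accesses], dtype=float)
    intervals = np.diff(times) if len(times) > 1 else np.array([0.0])
    return {
        "mean_interval": intervals.mean(),   # scanners fire rapidly
        "std_interval": intervals.std(),     # crawlers pace politely
        "port_dispersion": ports.std(),      # scanners often walk ports
        "distinct_ports": len(set(ports.astype(int))),
    }
```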
We present D-ForenRIA, a distributed forensic tool that automatically reconstructs user sessions in Rich Internet Applications (RIAs), using solely the full HTTP traces of the sessions as input. D-ForenRIA automatically recovers each browser state, reconstructs the DOMs, and re-creates screenshots of what was displayed to the user. The tool also recovers every action taken by the user in each state, including the user-input data. Our application domain is security forensics, where sometimes months-old sessions must be quickly reconstructed for immediate inspection. We will demonstrate our tool on a series of RIAs, including a vulnerable banking application created by IBM Security for testing purposes. In that case study, the attacker visits the vulnerable web site and exploits several vulnerabilities (SQL injection, XSS, etc.) to gain access to private information and to perform unauthorized transactions. D-ForenRIA can reconstruct the session, including screenshots of all pages seen by the attacker, the DOM of each page, the steps taken for the unauthorized login, and the inputs the attacker used for the SQL injection attack. D-ForenRIA is made efficient by applying advanced reconstruction techniques and by using several browsers concurrently to speed up the reconstruction process. Although we developed D-ForenRIA in the context of security forensics, the tool can also be useful in other contexts such as aided RIA debugging and automated RIA scanning.
There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the effectiveness of the two approaches for differentially private k-means clustering. We develop techniques to analyze the empirical error behaviors of the existing interactive and non-interactive approaches. Based on this analysis, we propose an improvement to DPLloyd, a differentially private version of Lloyd's algorithm. We also propose a non-interactive approach, EUGkM, which publishes a differentially private synopsis for k-means clustering. Results from extensive and systematic experiments support our analysis and demonstrate the effectiveness of our improvement to DPLloyd and of the proposed EUGkM algorithm.
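A minimal sketch of one DPLloyd-style iteration: Lloyd's update with Laplace noise added to per-cluster counts and coordinate sums before recomputing centroids. The noise scales and preprocessing are illustrative assumptions; the paper's budget allocation and specific improvements are omitted.

```python
# Sketch of one differentially private Lloyd iteration. Assumes data
# rescaled to [-1, 1]^d and a per-iteration budget eps; noise scales
# here are illustrative, not the paper's calibration.
import numpy as np

def dp_lloyd_step(X, centroids, eps, rng):
    k, d = centroids.shape
    # Assign each point to its nearest centroid.
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    new_centroids = np.empty_like(centroids)
    for j in range(k):
        pts = X[labels == j]
        # Perturb the count and the coordinate sums with Laplace noise,
        # then recompute the centroid from the noisy statistics.
        noisy_count = len(pts) + rng.laplace(scale=1.0 / eps)
        noisy_sum = pts.sum(axis=0) + rng.laplace(scale=d / eps, size=d)
        new_centroids[j] = noisy_sum / max(noisy_count, 1.0)
    return new_centroids
```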
IEEE 802.15.4 is widely used as the lower layers not only of well-known wireless communication standards such as ZigBee, 6LoWPAN, and WirelessHART, but also of customized protocols developed by manufacturers, particularly for various Internet of Things (IoT) devices. Customized protocols are usually neither publicly disclosed nor standardized. Moreover, unlike textual protocols (e.g., HTTP, SMTP, POP3), customized protocols for IoT devices provide no clues such as strings or keywords that are useful for analysis. Instead, they use bits or bytes to represent header and body information in order to save power and bandwidth. On the other hand, they often do not employ encryption, fragmentation, or authentication, to save cost and implementation effort. In other words, their security relies solely on the confidentiality of the protocol itself. In this paper, we introduce a novel methodology to analyze and reconstruct unknown wireless customized protocols over IEEE 802.15.4. Based on this methodology, we develop an automatic analysis and spoofing tool called WPAN automatic spoofer (WASp) that can be used to understand and reconstruct customized protocols to byte-level accuracy, and to generate packets that can be used to verify the analysis results or to mount spoofing attacks. The methodology consists of four phases: packet collection, packet grouping, protocol analysis, and packet generation. Except for the packet collection step, all steps are fully automated. We evaluate our methodology and WASp on two real-world target systems whose customized protocols were unknown to us before the collection phase: a smart plug system and a platform screen door (PSD) system. In the evaluation, 7,299 and 217 packets are used as datasets for the two target systems, respectively. On average, WASp reduces the entropy of the legitimate message space by 93.77% and 88.11% for the customized protocols used in the smart plug and PSD systems, respectively. In addition, on average, 48.19% of automatically generated packets are successfully spoofed for the first target system.
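One standard ingredient of such protocol analysis, sketched below for illustration (this is not WASp itself), is computing the byte-value entropy at each offset across the collected packets: near-zero entropy suggests fixed header fields, while high entropy suggests variable fields such as counters, payloads, or checksums.

```python
# Sketch: per-offset byte entropy over a set of captured packets,
# used to separate fixed header fields from variable fields.
import math
from collections import Counter

def per_offset_entropy(packets):
    """packets: list of bytes objects captured from the same device."""
    max_len = max(len(p) for p in packets)
    entropies = []
    for i in range(max_len):
        column = [p[i] for p in packets if len(p) > i]
        counts = Counter(column)
        n = len(column)
        h = -sum(c / n * math.log2(c / n) for c in counts.values())
        entropies.append(h)   # 0.0 = constant byte, up to 8.0 = random
    return entropies
```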
This paper proposes a novel distributed event-triggered algorithmic solution to the multi-agent average consensus problem for networks whose communication topology is described by weight-balanced, strongly connected digraphs. The proposed event-triggered communication and control strategy does not rely on individual agents having continuous or periodic access to information about the state of their neighbors. In addition, it does not require the agents to have a priori knowledge of any global parameter to execute the algorithm. We show that, under the proposed law, events cannot be triggered an infinite number of times in any finite period (i.e., no Zeno behavior), and that the resulting network executions provably converge to the average of the agents' initial states exponentially fast. We also provide weaker conditions on connectivity under which convergence is guaranteed when the communication topology is switching. Finally, we propose and analyze a periodic implementation of our algorithm in which the relevant triggering functions do not need to be evaluated continuously. Simulations illustrate our results and provide comparisons with other existing algorithms.
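As a toy illustration of event-triggered consensus (using a simple absolute-error trigger rather than the paper's triggering law), consider a discrete-time simulation on a directed 4-cycle, which is weight-balanced and strongly connected.

```python
# Toy simulation: agents run consensus dynamics driven by the last
# *broadcast* states x_hat, and rebroadcast only when their local error
# exceeds a fixed threshold. The trigger rule is illustrative only.
import numpy as np

A = np.array([[0, 1, 0, 0],       # adjacency of a directed 4-cycle
              [0, 0, 1, 0],       # (weight-balanced: in-degree equals
              [0, 0, 0, 1],       #  out-degree at every node)
              [1, 0, 0, 0]], dtype=float)

x0 = np.array([1.0, 5.0, -2.0, 4.0])   # initial states, average = 2.0
x = x0.copy()
x_hat = x.copy()                        # last broadcast values
dt, threshold = 0.01, 0.05

for _ in range(2000):
    # u_i = -sum_j a_ij (x_hat_i - x_hat_j), using broadcast info only.
    u = -(x_hat * A.sum(axis=1) - A @ x_hat)
    x = x + dt * u
    # An agent rebroadcasts only when its error exceeds the threshold.
    trigger = np.abs(x - x_hat) > threshold
    x_hat[trigger] = x[trigger]

print("final states:", x.round(3), "| initial average:", x0.mean())
```

With the fixed threshold, states settle within roughly the threshold of the initial average; the paper's law additionally rules out Zeno behavior and gives exponential convergence guarantees.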
Information retrieval methods, especially for multimedia data, have evolved towards the integration of multiple sources of evidence in analyzing the relevance of items for a given user search task. In this context, to attenuate the semantic gap between low-level features extracted from the content of digital objects and high-level semantic concepts (objects, categories, etc.), and to make systems adaptive to different user needs, interactive models have brought the user closer to the retrieval loop, allowing user-system interaction mainly through implicit or explicit relevance feedback. Analogously, diversity promotion has emerged as an alternative for tackling ambiguous or underspecified queries. Additionally, several works have addressed the issue of minimizing the user effort required to provide relevance assessments while keeping an acceptable overall effectiveness. This thesis discusses, proposes, and experimentally analyzes multimodal and interactive diversity-oriented information retrieval methods. This work comprehensively covers the interactive information retrieval literature and also discusses recent advances, the major research challenges, and promising research opportunities. We have proposed and evaluated two relevance-diversity trade-off enhancement workflows, which integrate multiple sources of information from images, such as visual features, textual metadata, geographic information, and user credibility descriptors. In turn, as an integration of interactive retrieval and diversity promotion techniques, to maximize the coverage of multiple query interpretations/aspects and speed up the information transfer between the user and the system, we have proposed and evaluated a multimodal online learning-to-rank method trained with relevance feedback over diversified results. Our experimental analysis shows that the joint usage of multiple information sources positively impacts the relevance-diversity balancing algorithms. Our results also suggest that the integration of multimodal relevance-based filtering and reranking is effective in improving result relevance and also boosts diversity promotion methods. Beyond that, through a thorough experimental analysis, we investigate several research questions related to the possibility of improving result diversity while keeping or even improving relevance in interactive search sessions. Moreover, we analyze how much the diversification effort affects overall search session results and how different diversification approaches behave for the different data modalities. By analyzing overall and per-feedback-iteration effectiveness, we show that introducing diversity may harm initial results, whereas it significantly enhances overall session effectiveness, considering not only relevance and diversity but also how early the user is exposed to the same amount of relevant items and diversity.
Best Poster Award, Illinois Institute of Technology Research Day, April 11, 2016.
The wide use of cloud computing and of data outsourcing raises important concerns with regard to data security, resulting in the necessity of protection mechanisms such as encryption of sensitive data. The recent major theoretical breakthrough of finding the Holy Grail of encryption, i.e., fully homomorphic encryption, guarantees the privacy of queries and of their results on encrypted data. However, only a few studies propose a practical performance evaluation of the use of homomorphic encryption schemes to perform database queries. In this paper, we propose and analyse, in the context of a secure framework for a generic database query interpreter, two different methods in which client requests are dynamically executed on homomorphically encrypted data. Dynamic compilation of the requests makes it possible to take advantage of the different optimizations performed during an off-line step on an intermediate code representation, taking the form of boolean circuits, and, moreover, to specialize the execution using runtime information. Also, for the returned encrypted results, we assess the complexity and the efficiency of the different protocols proposed in the literature in terms of overall execution time, accuracy, and communication overhead.
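For flavour, here is a minimal example of computing on ciphertexts using the additively homomorphic Paillier scheme from the python-paillier (phe) package, rather than the fully homomorphic schemes and boolean circuits the paper targets; it only illustrates the general idea of evaluating a query without decrypting the data.

```python
# Sketch: an aggregate query evaluated on ciphertexts. Uses Paillier
# (additive homomorphism only); the paper's framework uses FHE.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

salaries = [3200, 4100, 2800]                       # client-side plaintext
enc_salaries = [public_key.encrypt(s) for s in salaries]

# Server side: evaluate "SUM(salary)" directly on the ciphertexts.
enc_total = enc_salaries[0]
for c in enc_salaries[1:]:
    enc_total = enc_total + c

# Only the client, holding the private key, can read the result.
print(private_key.decrypt(enc_total))               # -> 10100
```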
Proxy Mobile IPv6 (PMIPv6) is an IP mobility protocol. In a PMIPv6 domain, the local mobility anchor is involved in control as well as data communication. To ease the load on a mobility anchor and avoid a single point of failure, the PMIPv6 standard provides the option of having multiple mobility anchors. In this paper, we propose a Software Defined Networking (SDN) based solution to provide load balancing among mobility anchors in an SDN-based PMIPv6 domain. In the proposed solution, a mobility controller acts as a central control entity and performs load monitoring on the mobility anchors. On detecting that the load on a certain mobility anchor has crossed a threshold, the controller moves some traffic from the highly loaded mobility anchor to a relatively less loaded one. An analytical model and a preliminary performance evaluation of the proposed solution are presented, demonstrating 5% and 40% improvements in uplink and downlink traffic disruption periods, respectively.
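A minimal sketch of the threshold-based controller logic described above; all names, the load metric, and the threshold value are illustrative assumptions, not the paper's exact design.

```python
# Sketch of controller-side rebalancing across mobility anchors.
THRESHOLD = 0.8  # assumed fraction of anchor capacity

def rebalance(anchor_loads, sessions_by_anchor):
    """anchor_loads: dict anchor_id -> load in [0, 1].
    sessions_by_anchor: dict anchor_id -> list of session ids."""
    overloaded = [a for a, load in anchor_loads.items() if load > THRESHOLD]
    migrations = []
    for src in overloaded:
        # Pick the least-loaded anchor as the migration target.
        dst = min(anchor_loads, key=anchor_loads.get)
        if dst == src or not sessions_by_anchor[src]:
            continue
        session = sessions_by_anchor[src].pop()
        migrations.append((session, src, dst))
        # In an SDN deployment, the controller would now install flow
        # rules redirecting this session's tunnel towards dst.
    return migrations
```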
While event logs generated by business processes play an increasingly significant role in business analysis, their data quality remains a serious problem. Automatic recovery of dirty event logs is therefore desirable and has received growing attention. However, existing methods either focus only on recovering missing events or fall short in efficiency. To this end, we present Effa, a ProM plugin that automatically recovers event logs in light of process specifications. Based on advanced heuristics, including process decomposition and trace replaying to search for the minimum recovery, Effa achieves a balance between repair accuracy and efficiency.
Wireless sensor networks (WSNs) play a vital role in collecting data about natural or built environments. WSNs have attractive advantages such as low cost, low maintenance, and flexible deployment, and have been used for many different applications, such as military deployments on a battlefield, environmental monitoring, and various functions in the health sector. To ensure their functionality, especially in malicious environments, security mechanisms are essential. Internal attacks in particular have gained prominence and pose some of the most challenging threats to all WSNs. Although a number of works have discussed WSNs under internal attack, the problem has received little attention; for example, conventional cryptographic techniques do not provide adequate protection against internal attacks caused by the abnormal behaviour of legitimate nodes in a network. In this paper, we propose an effective algorithm that detects internal attacks in real time by evaluating multiple criteria. The protection is based on combining multiple pieces of evidence collected from the nodes of a network under internal attack. The underlying decision theory is carefully discussed and is based on Dempster-Shafer Theory (DST). The algorithm is useful for verifying that a deployed network works exactly as designed. Its advantages are not only its real-time performance but also its effectiveness: it requires no advance knowledge of which nodes are normal or malicious, and it achieves a very high average accuracy, close to 100%. It can also be used as a maintenance tool for deployed WSNs.
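As background for the DST-based fusion, here is a minimal sketch of Dempster's rule of combination applied to two hypothetical bodies of evidence about a node; the mass values and evidence sources are invented for illustration.

```python
# Sketch of Dempster's rule: fuse two mass functions over the frame
# {normal, malicious} into one, discarding conflicting mass.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass values."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # K: mass on empty intersections
    # Normalize by the non-conflicting mass (1 - K).
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

N, M = frozenset(["normal"]), frozenset(["malicious"])
evidence1 = {N: 0.6, M: 0.3, N | M: 0.1}   # e.g. packet-drop behaviour
evidence2 = {N: 0.2, M: 0.7, N | M: 0.1}   # e.g. forwarding-delay behaviour
print(combine(evidence1, evidence2))
```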
Using the sparsity of a signal, compressed sensing theory can sample and compress data at a rate lower than the Nyquist sampling rate; the signal must have a sparse representation, however. Based on this theory, this article puts forward a sparsity-adaptive algorithm for the detection and reconstruction of underwater target radiation signals. The received underwater target radiation signal first passes through a stochastic resonance system, which transfers noise energy into the energy of the signal under test; then, based on the Gerschgorin disk criterion, the number of underwater target radiation signals is estimated in order to determine the optimal sparsity level for compressed sensing; finally, the detection and reconstruction of the original signal are realized using compressed sensing techniques. Simulation results show that this method can effectively detect underwater target radiation signals, and that they can also be detected quite well under low signal-to-noise ratios (SNR).
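A sketch of the final reconstruction step using standard orthogonal matching pursuit (OMP) on synthetic data, assuming the sparsity level k has already been estimated (in the paper, via the Gerschgorin disk criterion); the signal setup below is synthetic, not sonar data.

```python
# Sketch: reconstruct a k-sparse signal from m < n random measurements
# with orthogonal matching pursuit.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                # measurement matrix
y = Phi @ x                                               # compressed samples

support, residual = [], y.copy()
for _ in range(k):
    # Greedily pick the column most correlated with the residual,
    # then re-fit the coefficients on the chosen support.
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coeffs

x_rec = np.zeros(n)
x_rec[support] = coeffs
print("reconstruction error:", np.linalg.norm(x - x_rec))
```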
In the Android system, each application runs in its own sandbox, and the permission mechanism is used to enforce access control to system APIs and applications. However, a permission leak can occur when an application without a certain permission illegally gains access to protected resources through another privileged application. To address permission leaks in a trusted execution environment, this paper designs a security architecture comprising a sandbox module, a middleware module, and a usage and access control module, and proposes an effective usage and access control scheme that can prevent permission leaks in a trusted execution environment. A security architecture based on the scheme has been implemented on an ARM-Android platform, and the evaluation of the proposed scheme demonstrates its effectiveness in mitigating permission leak vulnerabilities.
In recent years, simple password-based authentication systems have increasingly proven ineffective for many classes of real-world devices. As a result, many researchers have concentrated their efforts on the design of new biometric authentication systems. This trend has been further accelerated by the advent of mobile devices, which offer numerous sensors and capabilities to implement a variety of mobile biometric authentication systems. Along with the advances in biometric authentication, however, attacks have also become much more sophisticated, and many biometric techniques have ultimately proven inadequate in the face of advanced attackers in practice. In this paper, we investigate the effectiveness of sensor-enhanced keystroke dynamics, a recent mobile biometric authentication mechanism that combines a particularly rich set of features. In our analysis, we consider different types of attacks, with a focus on advanced attacks that draw from general population statistics. Such attacks have already proven effective in drastically reducing the accuracy of many state-of-the-art biometric authentication systems. We implemented a statistical attack against sensor-enhanced keystroke dynamics and evaluated its impact on detection accuracy. On one hand, our results show that sensor-enhanced keystroke dynamics are generally robust against statistical attacks, with a marginal equal-error-rate impact (<0.14%). On the other hand, our results show that, surprisingly, keystroke timing features non-trivially weaken the security guarantees provided by sensor features alone. Our findings suggest that sensor dynamics may be a stronger biometric authentication mechanism against recently proposed practical attacks.
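A minimal sketch of the statistical-attack idea: forgery attempts are synthesized from population-level feature statistics rather than from any victim's samples. The independent-Gaussian sampling model and the synthetic data are simplifying assumptions for illustration.

```python
# Sketch: generate forgery attempts from general population statistics.
import numpy as np

rng = np.random.default_rng(0)

def statistical_forgeries(population_samples, n_attempts=100):
    """population_samples: array (n_samples, n_features) of timing/sensor
    features collected from the general population."""
    mu = population_samples.mean(axis=0)
    sigma = population_samples.std(axis=0)
    # Draw candidate feature vectors near the population mass, where
    # most users' templates are expected to lie.
    return rng.normal(mu, sigma, size=(n_attempts, len(mu)))

# Synthetic stand-in for real population data (1000 samples, 8 features).
population = rng.normal(loc=100.0, scale=15.0, size=(1000, 8))
attempts = statistical_forgeries(population)
print(attempts.shape)   # (100, 8); each row is submitted to the verifier
```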
With the growth of cloud computing, database outsourcing has attracted much interest. Due to the serious privacy threats in cloud computing, databases need to be encrypted before being outsourced to the cloud. Accordingly, various Top-k query processing algorithms have been studied for encrypted databases. However, existing algorithms are either insecure or inefficient. In this paper, we therefore propose an efficient and secure Top-k query processing algorithm. Our algorithm guarantees the confidentiality of both the data and the user query while hiding data access patterns, and it does not require the query issuer to participate in the query processing. To achieve a high level of query processing efficiency, we use new secure protocols based on Yao's garbled circuits and a data packing technique. A performance analysis shows that the proposed algorithm outperforms existing work in terms of query processing costs.
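As an aside on the data packing idea, here is a minimal sketch of packing several small values into one large integer so that a single ciphertext (or circuit input) carries many values at once; the 16-bit slot width is an illustrative choice, not the paper's parameters.

```python
# Sketch of data packing: fixed-width slots inside one big integer.
SLOT_BITS = 16   # assumed slot width; each value must fit in 16 bits

def pack(values):
    packed = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << SLOT_BITS)
        packed |= v << (i * SLOT_BITS)   # place value i in slot i
    return packed

def unpack(packed, count):
    mask = (1 << SLOT_BITS) - 1
    return [(packed >> (i * SLOT_BITS)) & mask for i in range(count)]

scores = [712, 45, 60000, 3]
assert unpack(pack(scores), len(scores)) == scores
```

Packing amortizes the per-ciphertext cost: one encryption, transfer, and decryption covers several values, which is where much of the efficiency gain in such protocols typically comes from.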