Biblio
Network attacks, especially DoS and DDoS attacks, are a significant threat to all providers of services or infrastructure. The biggest attacks can paralyze even the large-scale infrastructures of worldwide companies. Attack mitigation is a complex issue studied by many researchers and security companies; while several approaches have been proposed, there is still room for improvement. This paper proposes to augment an existing mitigation heuristic with knowledge of the reputation scores of network entities. The aim is to mitigate the malicious traffic present in DDoS amplification attacks with minimal disruption to legitimate traffic.
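A minimal sketch of the general idea of combining a mitigation heuristic with a reputation score; the weighting, threshold, and function names are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: combine a generic per-flow suspicion score from an
# existing mitigation heuristic with a reputation score of the source entity.
def should_drop(flow_score: float, reputation: float,
                weight: float = 0.5, threshold: float = 0.7) -> bool:
    """flow_score: 0..1 suspicion from the mitigation heuristic.
    reputation: 0..1 trust score of the source entity (1 = fully trusted)."""
    combined = weight * flow_score + (1 - weight) * (1.0 - reputation)
    return combined >= threshold

# A moderately suspicious flow from a low-reputation source is dropped,
# while the same flow from a well-reputed source is allowed through.
print(should_drop(0.6, reputation=0.1))  # True
print(should_drop(0.6, reputation=0.9))  # False
```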
Many abstract security measurements are based on characteristics of a graph that represents the network. These are typically simple and quick to compute but are often of little practical use in making real-world predictions. Practical network security is often measured using simulation or real-world exercises. These approaches better represent realistic outcomes but can be costly and time-consuming. This work aims to combine the strengths of these two approaches, developing efficient heuristics that accurately predict attack success. Hyper-heuristic machine learning techniques, trained on network attack simulation training data, are used to produce novel graph-based security metrics. These low-cost metrics serve as an approximation for simulation when measuring network security in real time. The approach is tested and verified using a simulation based on activity from an actual large enterprise network. The results demonstrate the potential of using hyper-heuristic techniques to rapidly evolve and react to emerging cybersecurity threats.
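A rough sketch of the overall pattern (cheap graph-based features fitted against simulation-derived labels), not the paper's hyper-heuristic machine learning pipeline; the feature set, graph, and labels here are placeholder assumptions.

```python
# Compute inexpensive per-node graph features and fit a simple linear model
# against attack-success labels that would, in the paper's setting, come from
# network attack simulation. Feature choice and model are assumptions.
import networkx as nx
import numpy as np

def node_features(g: nx.Graph) -> np.ndarray:
    deg = dict(g.degree())
    btw = nx.betweenness_centrality(g)
    clu = nx.clustering(g)
    return np.array([[deg[n], btw[n], clu[n]] for n in g.nodes()])

g = nx.barabasi_albert_graph(200, 3, seed=1)
X = node_features(g)
# Placeholder labels standing in for per-node compromise rates from a simulator.
y = np.random.default_rng(1).random(len(X))
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
risk = A @ coef   # fast, graph-metric-based approximation of simulated risk
```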
Community detection in complex networks is a fundamental problem that attracts much attention across various disciplines. Previous studies have mostly focused on external connections between nodes (i.e., topology structure) in the network while largely ignoring the internal intricacies (i.e., local behavior) of each node. A pair of nodes without any interaction can still share similar internal behaviors. For example, in an enterprise information network, compromised computers controlled by the same intruder often demonstrate similar abnormal behaviors even if they do not connect with each other. In this paper, we study the problem of community detection in enterprise information networks, where large-scale internal events and external events coexist on each host. The discovered host communities, capturing behavioral affinity, can benefit many comparative analysis tasks such as host anomaly assessment. In particular, we propose a novel community detection framework to identify behavior-based host communities in enterprise information networks, purely based on large-scale heterogeneous event data. We further propose an efficient method for assessing a host's anomaly level by leveraging the detected host communities. Experimental results on enterprise networks demonstrate the effectiveness of our model.
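A toy sketch of behavior-based grouping (not the paper's framework): hosts are represented by event-type frequency vectors and linked by similarity, so hosts with similar internal behavior can land in the same community even without a direct connection. The event names, similarity threshold, and grouping rule are assumptions.

```python
# Group hosts by behavioral similarity of their event profiles rather than by
# their topological connections.
from collections import Counter
from itertools import combinations
import numpy as np
import networkx as nx

host_events = {   # hypothetical per-host event logs
    "host-a": ["proc_start", "dns_query", "reg_write", "dns_query"],
    "host-b": ["dns_query", "proc_start", "dns_query", "reg_write"],
    "host-c": ["file_read", "file_read", "logon"],
}
vocab = sorted({e for evs in host_events.values() for e in evs})
vec = {h: np.array([Counter(evs)[e] for e in vocab], float)
       for h, evs in host_events.items()}

sim_graph = nx.Graph()
sim_graph.add_nodes_from(host_events)
for a, b in combinations(host_events, 2):
    cos = vec[a] @ vec[b] / (np.linalg.norm(vec[a]) * np.linalg.norm(vec[b]))
    if cos > 0.8:                      # assumed similarity threshold
        sim_graph.add_edge(a, b)

communities = list(nx.connected_components(sim_graph))
print(communities)   # host-a and host-b share a behavior-based community
```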
We propose a calculus in a process-algebraic framework to formally model Intrusion Detection Systems (IDS) for secure routing in Mobile Ad-hoc Networks. The proposed calculus, named dRi, is an extension of the Distributed pi calculus (Dpi). It models unicast, multicast and broadcast communication, node mobility, energy conservation at nodes and detection of malicious nodes in Mobile Ad-hoc Networks. The calculus has two syntactic categories: one for describing nodes and another for the processes that reside in nodes. We also present two views of semantic reductions: one as reductions on configurations and the other as Labelled Transition Systems (LTSs), a behavioural semantics in which reductions on configurations are described in terms of various actions. We present an example described using LTSs to show the capability of the proposed calculus. We define a bisimulation-based equivalence between configurations. Further, we define a touchstone equivalence on the reduction semantics and present a proof outline showing that the bisimulation-based equivalence can be recovered from the touchstone equivalence and vice versa.
To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric, which measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling, TTC combines simplicity with expressiveness and has therefore evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose $\beta$-TTC, a formal extension of TTC that includes information from CVSS vectors as well as a continuous attacker skill based on a $\beta$-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC, using data from a modern and productively used vulnerability database of a national CERT.
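A rough illustration of the core ingredient described above, not the paper's $\beta$-TTC formula: attacker skill is drawn from a Beta distribution and a skill-dependent time-to-compromise is averaged by Monte Carlo sampling. The shape parameters and the toy skill-to-time mapping are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta_param = 2.0, 5.0   # assumed shape of the attacker-skill distribution
base_days = 30.0               # assumed time for a minimally skilled attacker

skill = rng.beta(alpha, beta_param, size=100_000)   # continuous skill in (0, 1)
ttc_samples = base_days * (1.0 - skill)             # toy model: more skill, less time
print("expected time-to-compromise (days):", ttc_samples.mean())
```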
In-network caching is a feature shared by all proposed Information Centric Networking (ICN) architectures as it is critical to achieving more efficient retrieval of content. However, the default "cache everything everywhere" universal caching scheme has caused the emergence of several privacy threats. Timing attacks are one such privacy breach, where attackers probe caches and use timing analysis of data retrievals to identify whether content was retrieved from the data source or from the cache, the latter implying that the content was requested recently. We have previously proposed a betweenness centrality based caching strategy to mitigate such attacks by increasing user anonymity, and demonstrated its efficacy in a transit-stub topology. In this paper, we further investigate the effect of betweenness centrality based caching on cache privacy and user anonymity in more general synthetic and real-world Internet topologies. It was also shown that an attacker with access to multiple compromised routers can locate and track a mobile user by carrying out multiple timing analysis attacks from various parts of the network. We extend our privacy evaluation to a scenario with mobile users and show that a betweenness centrality based caching policy provides a mobile user with path privacy by increasing an attacker's difficulty in locating a moving user or identifying his/her route.
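A minimal sketch of the underlying idea (betweenness-centrality-driven cache placement), not the paper's full strategy: only routers whose betweenness centrality is above a threshold are allowed to cache content. The topology generator and the top-10% threshold are assumptions.

```python
import networkx as nx

topology = nx.random_internet_as_graph(100, seed=3)   # synthetic Internet-like graph
centrality = nx.betweenness_centrality(topology)
threshold = sorted(centrality.values(), reverse=True)[len(centrality) // 10]

def may_cache(router) -> bool:
    """Cache only at the most central ~10% of routers."""
    return centrality[router] >= threshold

caching_routers = [r for r in topology.nodes() if may_cache(r)]
print(len(caching_routers), "routers are allowed to cache")
```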
The need for security continues to grow in distributed computing. Today's security solutions require greater scalability and convenience in cloud-computing architectures, in addition to the ability to store and process larger volumes of data to address very sophisticated attacks. This paper explores some of the existing architectures for big data intelligence analytics and proposes an architecture that promises to provide greater security for data-intensive environments. The architecture is designed to leverage the wealth of multi-source information for security intelligence.
Compile-time specialization and feature pruning through static binary rewriting have been proposed repeatedly as techniques for reducing the attack surface of large programs and for minimizing the trusted computing base. We propose a new approach to attack surface reduction: dynamic binary lifting and recompilation. We present BinRec, a binary recompilation framework that lifts binaries to a compiler-level intermediate representation (IR) to allow complex transformations on the captured code. After transformation, BinRec lowers the IR back to a "recovered" binary, which is semantically equivalent to the input binary but with unnecessary features removed. Unlike existing approaches, which are mostly based on static analysis and rewriting, our framework analyzes and lifts binaries dynamically. The crucial advantage is that we not only observe the full program, including all of its dependencies, but can also determine which program features the end user actually uses. We evaluate the correctness and performance of BinRec and show that our approach enables aggressive pruning of unwanted features in COTS binaries.
We present improved protocols for the conversion of secret-shared bit-vectors into secret-shared integers and vice versa, for use as subroutines in secure multiparty computation (SMC) protocols and in protocols verifying the adherence of parties to prescribed SMC protocols. The protocols are primarily designed for three-party computation with an honest majority. We evaluate our protocols as part of the Sharemind three-party protocol set and observe a general reduction of verification overheads, thereby increasing the practicality of covertly or actively secure Sharemind protocols.
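A toy illustration of bit-vector to integer conversion under additive secret sharing, not the Sharemind three-party protocols themselves: when the bits are shared over the same ring as the target integer, each party can locally weight its bit shares by powers of two and sum them. The modulus and party count are assumptions, and the harder case of converting between different share rings is not shown.

```python
import secrets

MOD = 2**32   # assumed ring size

def share(value: int, parties: int = 3):
    """Split value into additive shares modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Secret-shared bits of the value 13 = 0b1101 (least significant bit first).
bits = [1, 0, 1, 1]
bit_shares = [share(b) for b in bits]

# Each party locally computes sum(2^i * b_i) over its own shares, yielding an
# additive sharing of the composed integer without any communication.
int_shares = [sum((2**i) * bit_shares[i][p] for i in range(len(bits))) % MOD
              for p in range(3)]
assert reconstruct(int_shares) == 13
```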
Least Significant Bit (LSB) embedding is one of the most widely used steganography methods because it is easy to apply, but it has the weakness that the hidden message is too easy to decode. This is because LSB embeds the message evenly across all pixels of an image. This paper introduces a steganography method, abbreviated LSB\_FCM, that combines LSB with the Fuzzy C-Means (FCM) clustering method, and compares the resulting stego images with plain LSB. Each image is divided into two clusters, the cluster with the largest capacity is chosen, and the cluster coordinates are saved as the key identifying where the message is embedded. The key serves as a reference when decoding the message, and each image has its own cluster key. LSB\_FCM has the disadvantage of limited embedding capacity, but it is harder to decode than plain LSB because the message is embedded only in the pixels of the best cluster, so an adversary must possess the cluster coordinate key of the image to recover it. Evaluation results show that the MSE and PSNR values of LSB\_FCM are similar to those of pure LSB, meaning LSB\_FCM produces images as imperceptible as pure LSB while offering better security through its choice of embedding locations.
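A rough sketch of the combined idea (fuzzy c-means clustering of pixel intensities plus LSB embedding restricted to the largest cluster); it is not the paper's exact algorithm, and the cluster count, fuzzifier, and iteration budget are assumptions.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Return hard cluster labels for 1-D data x using fuzzy c-means."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return u.argmax(axis=1)

def embed(pixels: np.ndarray, message_bits):
    """Embed bits in the LSBs of pixels belonging to the largest cluster."""
    flat = pixels.ravel().astype(np.uint8)
    labels = fuzzy_c_means(flat.astype(float))
    best = np.bincount(labels).argmax()                 # largest-capacity cluster
    key = np.flatnonzero(labels == best)[: len(message_bits)]   # coordinate key
    assert len(key) == len(message_bits), "cluster too small for this message"
    flat[key] = (flat[key] & 0xFE) | np.array(message_bits, dtype=np.uint8)
    return flat.reshape(pixels.shape), key

def extract(stego: np.ndarray, key):
    """Recover the bits; without the key the embedding positions are unknown."""
    return list(stego.ravel()[key] & 1)
```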
Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to 40% on Enron and from 87% to 26% on IMDB. Our results strongly demonstrate that the generated adversarial sequences from a deep-learning model can similarly evade other deep models.
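A simplified sketch of the black-box attack pattern described above (score words by querying the model, then apply character-level edits to the top-ranked ones). The deletion-based scoring, the adjacent-character swap, and the `predict_prob` callable are stand-in assumptions rather than DeepWordBug's exact scoring strategies.

```python
import random

def word_scores(text: str, predict_prob):
    """Score each word by the drop in the predicted-class probability when removed."""
    words = text.split()
    base = predict_prob(text)
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((i, base - predict_prob(reduced)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

def perturb(text: str, predict_prob, k: int = 3, seed: int = 0) -> str:
    """Swap two adjacent characters in each of the k highest-ranked words,
    keeping the edit distance of the perturbation small."""
    rng = random.Random(seed)
    words = text.split()
    for i, _ in word_scores(text, predict_prob)[:k]:
        w = words[i]
        if len(w) > 2:
            j = rng.randrange(len(w) - 1)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

# `predict_prob` is a hypothetical black-box callable returning the probability
# of the currently predicted class for a given input text.
```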
With the widespread deployment of Control-Flow Integrity (CFI), control-flow hijacking attacks, and consequently code reuse attacks, are significantly more difficult. CFI limits control flow to well-known locations, severely restricting arbitrary code execution. Assessing the remaining attack surface of an application under advanced control-flow hijack defenses such as CFI and shadow stacks remains an open problem. We introduce BOPC, a mechanism to automatically assess whether an attacker can execute arbitrary code on a binary hardened with CFI/shadow stack defenses. BOPC computes exploits for a target program from payload specifications written in a Turing-complete, high-level language called SPL that abstracts away architecture and program-specific details. SPL payloads are compiled into a program trace that executes the desired behavior on top of the target binary. The input for BOPC is an SPL payload, a starting point (e.g., from a fuzzer crash) and an arbitrary memory write primitive that allows application state corruption. To map SPL payloads to a program trace, BOPC introduces Block Oriented Programming (BOP), a new code reuse technique that utilizes entire basic blocks as gadgets along valid execution paths in the program, i.e., without violating CFI or shadow stack policies. We find that the problem of mapping payloads to program traces is NP-hard, so BOPC first reduces the search space by pruning infeasible paths and then uses heuristics to guide the search to probable paths. BOPC encodes the BOP payload as a set of memory writes. We execute 13 SPL payloads applied to 10 popular applications. BOPC successfully finds payloads and complex execution traces – which would likely not have been found through manual analysis – while following the target's Control-Flow Graph under an ideal CFI policy in 81% of the cases.
Blockchain, the underlying or core technology of the Bitcoin cryptocurrency, has become a blessing in this era of increasing digitization. Bitcoin creates virtual money using Blockchain and has become popular around the world. Blockchain is a shared public ledger that includes all confirmed transactions, and it is almost impossible to crack the information hidden in its blocks. However, there are certain security and technical challenges, such as scalability, privacy leakage and selfish mining, which hamper the wide application of Blockchain. In this paper, we briefly discuss this emerging technology, namely Blockchain. In addition, we provide in-depth insight into Blockchain technology.
Peer to Peer (P2P) is a dynamic and self-organized technology, widely used in file-sharing applications to achieve better performance and avoid a single point of failure. Its popularity has attracted many attackers mounting different attacks, including the Sybil attack, the Routing Table Insertion (RTI) attack and free riding. Many mitigation methods have been proposed to defend against or reduce the impact of such attacks; however, most of these approaches are protocol specific. In this work, we propose a Blockchain-based security framework for P2P networks that addresses such security issues and can be tailored to any P2P file-sharing system.
One of the biggest challenges for the Internet of Things (IoT) is to bridge the currently fragmented trust domains. The traditional PKI model relies on a common root of trust and does not fit well with the heterogeneous IoT ecosystem, where constrained devices belong to independent administrative domains. In this work we describe a distributed trust model for the IoT that leverages the existing trust domains and bridges them to create end-to-end trust between IoT devices without relying on any common root of trust. Furthermore, we define a new cryptographic primitive, denoted the obligation chain, designed as a credit-based Blockchain with a built-in reputation mechanism. Its innovative design enables a wide range of use cases and business models that are simply not possible with current Blockchain-based solutions, while avoiding traditional blockchain delays. We provide a security analysis for both the obligation chain and the overall architecture, and present experimental tests that show its viability and quality.
We discuss a beneficial botnet that counters the technology of malicious botnets, such as peer-to-peer (P2P) networking and Domain Generation Algorithms (DGA). To cope with such botnet technology, we are developing a beneficial botnet as an anti-bot measure, building on our previously developed beneficial bot. The beneficial botnet is a group of beneficial bots. The P2P communication of a malicious botnet is hard to detect with a single Intrusion Detection System (IDS); our beneficial botnet can detect P2P communication through collaboration among its bots. The beneficial bots could detect the communication of a pseudo botnet that mimics malicious botnet communication, and our beneficial botnet may also detect DGA-based communication. Furthermore, our beneficial botnet can cope with the new technology of new botnets because, like malicious botnets, it has the ability to evolve.
The paper presents a new technique for botnet detection in corporate area networks. It is based on algorithms from artificial immune systems. The proposed approach is able to distinguish benign network traffic from malicious traffic using the clonal selection algorithm, taking into account the features that indicate a botnet's presence in the network. The approach constitutes the main improvement of the BotGRABBER system, which is able to detect IRC, HTTP, DNS and P2P botnets.
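A minimal, generic clonal-selection sketch (CLONALG-style), not BotGRABBER's actual implementation: detector vectors are evolved toward known malicious traffic feature vectors by cloning high-affinity detectors and mutating the clones, and a flow is flagged when it lies close to an evolved detector. The feature representation, parameters, and decision radius are all assumptions.

```python
import numpy as np

def affinity(detector, antigen):
    """Higher affinity means the detector matches the traffic pattern better."""
    return -np.linalg.norm(detector - antigen)

def train_memory_cells(malicious_flows, pool_size=20, n_clones=10,
                       mutation=0.2, generations=5, seed=0):
    """Evolve one memory detector per known malicious flow via clonal selection."""
    rng = np.random.default_rng(seed)
    dim = malicious_flows.shape[1]
    memory = []
    for antigen in malicious_flows:
        pool = rng.random((pool_size, dim))
        for _ in range(generations):
            aff = np.array([affinity(d, antigen) for d in pool])
            clones = pool[aff.argmax()] + rng.normal(0.0, mutation, (n_clones, dim))
            pool = np.vstack([pool, clones])
            aff = np.array([affinity(d, antigen) for d in pool])
            pool = pool[np.argsort(aff)[-pool_size:]]       # keep the fittest
        memory.append(pool[np.argmax([affinity(d, antigen) for d in pool])])
    return np.array(memory)

def looks_malicious(flow, memory, radius=0.3):
    """Flag a flow whose feature vector is close to any evolved detector."""
    return any(np.linalg.norm(flow - d) < radius for d in memory)
```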
Botnets represent a widely deployed framework for remotely infecting and controlling hundreds of networked computing devices for malicious ends. Traditionally, detection of Botnets from network data using machine learning approaches is framed as an offline, supervised learning activity. However, in practice both normal behaviours and Botnet behaviours represent non-stationary processes that develop continuously as new services/applications and malicious behaviours appear. This work formulates the task of Botnet detection as a streaming data task in which finite label budgets, class imbalance and incremental/online learning predominate. We demonstrate that effective Botnet detection is possible for label budgets as low as 0.5% when an active learning approach is adopted for genetic programming (GP) streaming data analysis. The full article appears as S. Khanchi et al. (2018) "On Botnet Detection with Genetic Programming under Streaming Data, Label Budgets and Class Imbalance" in Swarm and Evolutionary Computation, 39:139--140. https://doi.org/10.1016/j.swevo.2017.09.008
The IRC botnet is the earliest botnet group and one that has had a significant impact. Its characteristic is to control multiple zombie hosts through the IRC protocol, constructing command-and-control channels. Relevant research analyzes the large amount of network traffic generated by command interaction between the botnet clients and the C&C server. Packet-capture traffic monitoring is currently one of the more effective detection methods, but the captured information does not reflect the essential characteristics of an IRC botnet, which often leads to erroneous judgments. To identify whether botnet control servers belong to the same (homologous) botnet, dynamic network communication characteristic curves are extracted. Because the resulting time series have unequal lengths, clustering based on dynamic time warping (DTW) distance is used to group homologous botnets by category. To improve detection speed, the experiments use SAX to reduce the dimensionality of the extracted curves, lowering the time cost without reducing accuracy.
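A small self-contained DTW distance for comparing unequal-length communication curves, illustrating the kind of comparison described above; it is not the paper's full pipeline, the SAX dimensionality-reduction step is omitted, and the example curves are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Two communication-volume curves of different lengths (hypothetical data):
curve_a = [3, 5, 6, 5, 3, 2]
curve_b = [3, 6, 5, 2]
print(dtw_distance(curve_a, curve_b))
```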
This chapter explores the relationship between the concept of emergence, the goal of theoretical completeness, and the Principle of Sufficient Reason. Samuel Alexander and C. D. Broad argued for limits to the power of scientific explanation. Chemical explanation played a central role in their thinking. After Schrödinger’s work in the 1920s their examples seem to fall flat. However, there are more general lessons from the emergentists that need to be explored. There are cases where we know that explanation of some phenomenon is impossible. What are the implications of known limits to the explanatory power of science, and the apparent ineliminability of brute facts for emergence? One lesson drawn here is that we must embrace a methodological rather than a metaphysical conception of the Principle of Sufficient Reason.
Interactive environments are increasingly entering our daily life. Our homes are becoming smarter, and so are our working environments. Providing assistance that suits not only the current situation but also the individuals involved usually comes along with an increased scale of personal data being collected/requested and processed. While this may not be especially critical as long as the data does not leave one's smart home, circumstances change dramatically once smart home data is processed by cloud services and, all the more, as soon as an interactive assistance system is operated by our employer, who may have an interest in exploiting the data beyond its original purpose, e.g. for secretly evaluating the work performance of its personnel. In this paper we discuss how federated identity management could be augmented with distributed usage control and trusted computing technology so as to reliably arrange and enforce privacy-related requirements in externally operated interactive environments.
This paper presents CapeVM, a sensor node virtual machine aimed at delivering both high performance and a sandboxed execution environment that ensures malicious code cannot corrupt the VM's internal state or perform actions not allowed by the VM. CapeVM uses Ahead-of-Time compilation and introduces a range of optimisations to eliminate most of the overhead present in previous work on sensor node AOT compilers. A sandboxed execution environment is guaranteed by a set of checks. The structured nature of the VM's instruction set allows the VM to perform most checks at load time, reducing the need for expensive run-time checks compared to native code approaches. While some overhead from using a VM and adding sandbox checks cannot be avoided, CapeVM's optimisations reduce this overhead dramatically. We evaluate CapeVM using a set of IoT applications and show that this results in performance just 2.1x slower than unsandboxed native code. Thus, CapeVM combines the desirable properties of existing work on both sandboxed execution and virtual machines for sensor nodes, with significantly improved performance.
Cloud Storage Services (CSS) provide unbounded, robust file storage and facilitate pay-per-use and collaborative work for end users. However, due to security issues such as lack of confidentiality and malicious insiders, they have not gained widespread acceptance for storing sensitive information. Researchers have proposed proxy re-encryption schemes for secure data sharing through the cloud. With the advancement of computing technologies and the advent of quantum computing algorithms, the security of existing schemes can be compromised within seconds. Hence there is a need to design security schemes that are resistant to quantum computing. In this paper, a secure file-sharing scheme through cloud storage using a proxy re-encryption technique is proposed. The proposed scheme is proven chosen-ciphertext secure (CCA) under the hardness of the ring-LWE search problem in the random oracle model. The proposed scheme outperforms existing CCA-secure schemes in terms of re-encryption time and decryption time for encrypted files, yielding an efficient file-sharing scheme through cloud storage.
In recent years, real-world attacks against PKI have taken place frequently. For example, certificates for malicious domains issued by compromised CAs are widespread, and revoked certificates are still trusted by clients. In spite of a lot of research to improve the security of SSL/TLS connections, some problems remain unsolved. On the one hand, although log-based schemes provide a certificate audit service to quickly detect CA misbehavior, the security and data consistency of the log servers are ignored. On the other hand, revocation checking is neglected due to incomplete, insecure and inefficient certificate revocation mechanisms. Further, existing revocation-checking schemes are centralized, which introduces safety bottlenecks. In this paper, we propose a blockchain-based public and efficient audit scheme for TLS connections, called CertChain. Specifically, we propose a dependability-rank based consensus protocol in our blockchain system and a new data structure to support certificate forward traceability. Furthermore, we present a method that utilizes a dual counting Bloom filter (DCBF) to eliminate false positives and achieve economical space usage and efficient queries for certificate revocation checking. The security analysis and experimental results demonstrate that CertChain is suitable in practice with moderate overhead.
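A basic counting Bloom filter sketch for revocation checking (insert, delete, and membership query). This illustrates the underlying data structure only; the paper's dual-filter construction for eliminating false positives is not reproduced here, and the sizing parameters and key format are assumptions.

```python
import hashlib

class CountingBloomFilter:
    def __init__(self, size=8192, hashes=4):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str):          # e.g. when a certificate is revoked
        for p in self._positions(item):
            self.counters[p] += 1

    def remove(self, item: str):       # counting filters support deletion
        for p in self._positions(item):
            self.counters[p] -= 1

    def might_contain(self, item: str) -> bool:
        return all(self.counters[p] > 0 for p in self._positions(item))

cbf = CountingBloomFilter()
cbf.add("serial:0x1234abcd")
print(cbf.might_contain("serial:0x1234abcd"))  # True
print(cbf.might_contain("serial:0xdeadbeef"))  # False (with high probability)
```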
The security of web communication via the SSL/TLS protocols relies on the safe distribution of public keys associated with web domains in the form of X.509 certificates. Certificate authorities (CAs) are trusted third parties that issue these certificates. However, the CA ecosystem is fragile and prone to compromises. Starting with Google's Certificate Transparency project, a number of research works have recently looked at adding transparency for better CA accountability, effectively through public logs of all certificates issued by certification authorities, to augment the current X.509 certificate validation process in SSL/TLS. In this paper, leveraging recent progress in blockchain technology, we propose a novel system, called CTB, that makes it impossible for a CA to issue a certificate for a domain without obtaining consent from the domain owner. We further equip CTB with a certificate revocation mechanism. We implement CTB using IBM's Hyperledger Fabric blockchain platform. CTB's smart contract, written in Go, is provided for complete reference.