Bibliography
Content Delivery Networks (CDNs) are among the most promising technologies for improving the performance of client websites by redirecting web requests from browsers to geographically distributed CDN surrogate nodes. However, owing to the variable nature of CDNs, they suffer from various security and resource-allocation issues. The most common attack used to bring down an entire network, including a CDN, without even finding a loophole in its security is the DDoS attack. In this proposal, we present a distributed virtual honeypot model for mitigating DDoS attacks and preventing intrusions in order to secure CDNs. Honeypots are used to imitate the primary server so that the attack is diverted to the decoy rather than the main server. Our proposed layer-based model uses honeypots to reduce the cost of the system while maintaining smooth delivery across geographically distributed servers without performance degradation.
We have been investigating methods for establishing an effective, immediate defense mechanism against DDoS attacks launched on Web applications by hacker botnets, a mechanism that can become active immediately, without the preparation time (e.g., for training data) usually required by existing proposals. In this study, we propose a new mechanism, including new data structures and algorithms, that allows the detection and filtering of large volumes of attack packets (Web requests) by monitoring and capturing suspect groups of source IPs that send packets in similar patterns, i.e., with very high and similar frequencies. The proposed algorithm places great emphasis on reducing storage space and processing time, so it is promising for real-time attack response.
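The sketch below is a minimal illustration, not the paper's actual data structures, of the core idea: flag groups of source IPs whose request counts within one monitoring window are both very high and mutually similar. The rate threshold and similarity tolerance are assumed values.

```python
# Illustrative sketch (assumed parameters, not the paper's): group source IPs
# whose per-window request frequencies are very high and near-identical, and
# report such groups as suspected coordinated botnet sources.
from collections import Counter

HIGH_RATE = 50        # assumed requests per window counting as "very high"
SIMILARITY = 0.10     # assumed relative tolerance for "similar frequency"

def suspect_groups(window_requests):
    """window_requests: iterable of (timestamp, src_ip) observed in one monitoring window."""
    counts = Counter(ip for _, ip in window_requests)
    # keep only the very high-rate sources, ordered by frequency
    high = sorted((c, ip) for ip, c in counts.items() if c >= HIGH_RATE)
    groups, current = [], []
    for c, ip in high:
        if current and c - current[-1][0] > SIMILARITY * current[-1][0]:
            groups.append(current)
            current = []
        current.append((c, ip))
    if current:
        groups.append(current)
    # a coordinated botnet shows up as several sources with near-identical rates
    return [g for g in groups if len(g) > 1]

if __name__ == "__main__":
    window = [(0, f"10.0.0.{i % 5}") for i in range(300)]  # 5 IPs, ~60 requests each
    window += [(0, "192.168.1.9")] * 3                     # one ordinary client
    print(suspect_groups(window))
```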
With the rapid development of the Internet, preserving the security of confidential data has become a challenging issue. An effective method to this end is to apply steganography techniques. In this paper, we propose an efficient steganography algorithm which applies edge detection and the MPC algorithm for data concealment in digital images. The proposed edge detection scheme partitions the given image, namely the cover image, into blocks. Next, it identifies the edge blocks based on the variance of their corner pixels. Embedding the confidential data in sharp edges causes less distortion than embedding it in smooth areas. To diminish the distortion imposed by data embedding in edge blocks, we employ the LSB and MPC algorithms. In the proposed scheme, the blocks are first split into groups. Next, a full tree is constructed per group using the LSBs of its pixels. This tree is converted into another full tree over several rounds, and the resultant tree is used to modify the considered LSBs. After the data embedding process is complete, the final image, called the stego image, is obtained. According to the experimental results, the proposed algorithm improves PSNR by at least 5.4 compared to previous schemes.
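As a rough illustration of the block-based edge selection described above (the tree-based MPC step is omitted), the sketch below marks a block as an edge block when the variance of its four corner pixels exceeds a threshold and writes message bits into the LSBs of those blocks. The block size and threshold are assumptions, not values taken from the paper.

```python
# Hedged sketch: corner-variance edge-block detection plus plain LSB embedding.
# BLOCK and VAR_THRESHOLD are assumed constants, not the paper's settings.
import numpy as np

BLOCK = 4            # assumed block size
VAR_THRESHOLD = 100  # assumed corner-variance threshold

def edge_blocks(img):
    h, w = img.shape
    blocks = []
    for r in range(0, h - BLOCK + 1, BLOCK):
        for c in range(0, w - BLOCK + 1, BLOCK):
            # the four corner pixels of the block
            corners = img[[r, r, r + BLOCK - 1, r + BLOCK - 1],
                          [c, c + BLOCK - 1, c, c + BLOCK - 1]]
            if np.var(corners.astype(float)) > VAR_THRESHOLD:
                blocks.append((r, c))
    return blocks

def embed(img, bits):
    stego = img.copy()
    it = iter(bits)
    for r, c in edge_blocks(img):
        for i in range(BLOCK):
            for j in range(BLOCK):
                b = next(it, None)
                if b is None:
                    return stego
                # overwrite the least significant bit of the pixel
                stego[r + i, c + j] = (stego[r + i, c + j] & 0xFE) | b
    return stego

if __name__ == "__main__":
    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
    print("pixels changed:", int(np.sum(cover != stego)))
```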
In the era of an ever-growing number of smart devices, fraudulent practices through phishing websites have become an increasingly severe threat to modern computers and internet security. These websites are designed to steal personal information from users and spread over the internet without the knowledge of the user of the system. They give a false impression of genuineness by mirroring real, trusted web pages, which then leads to the loss of important user credentials. Detection of such fraudulent websites is therefore essential and the need of the hour. In this paper, various classifiers have been considered, and it was found that ensemble classifiers predict with the highest efficiency. The underlying question was whether a combined classifier model performs better than a single classifier model, leading to better efficiency and accuracy. For experimentation, three meta classifiers, namely AdaBoostM1, Stacking, and Bagging, have been taken into consideration for performance comparison. It is found that a meta classifier built by combining simple classifier(s) outperforms the simple classifiers' performance.
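The paper's experiment compares meta classifiers against simple ones; the sketch below only reproduces that single-versus-ensemble comparison with scikit-learn on a synthetic stand-in dataset, since the original feature set and toolchain are not reproduced here.

```python
# Illustrative single-vs-ensemble comparison on synthetic data standing in for
# phishing-website features; not the paper's dataset or exact classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost":    AdaBoostClassifier(random_state=0),
    "Bagging":     BaggingClassifier(random_state=0),
    "Stacking":    StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:12s} mean accuracy = {acc:.3f}")
```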
Quick UDP Internet Connections (QUIC) is an experimental transport protocol designed primarily to reduce connection establishment and transport latency, as well as to improve security standards with default end-to-end encryption in HTTP-based applications. QUIC is a multiplexed and secure transport protocol fostered by Google, and its design emerged from the urgent need for innovation in the transport layer, mainly due to the difficulties of extending TCP and deploying new protocols. While still under standardisation, a non-negligible fraction of the Internet's traffic, more than 7% of a European Tier-1 ISP, is already running over QUIC, and it constitutes more than 30% of Google's egress traffic [1].
Originally implemented by Google, QUIC has gathered growing interest by providing, on top of UDP, the same service as the classical TCP/TLS/HTTP/2 stack. The IETF will finalise the QUIC specification in 2019. A key feature of QUIC is that almost all of its packets, including most of their headers, are fully encrypted. This prevents eavesdropping and interference caused by middleboxes. Thanks to this feature and its clean design, QUIC is easier to extend than TCP. In this paper, we revisit the reliable transmission mechanisms included in QUIC. More specifically, we design, implement, and evaluate Forward Erasure Correction (FEC) extensions to QUIC. These extensions are mainly intended for high-delay and lossy communications such as in-flight communications. Our design includes a generic FEC frame, and our implementation supports the XOR, Reed-Solomon, and Convolutional RLC error-correcting codes. We also conservatively avoid hindering the loss-based congestion signal by distinguishing the packets that have been received from the packets that have been recovered by the FEC. We evaluate its performance by applying an experimental design covering a wide range of delay and packet-loss conditions with reproducible experiments. These confirm that our modular design allows the protocol to adapt to the network conditions. For long data transfers, or when the loss rate and delay are small, the FEC overhead negatively impacts the download completion time. However, with high packet loss rates and long delays, or with smaller files, FEC drastically reduces the download completion time by avoiding costly retransmission timeouts. These results show that FEC needs to be used adaptively to the network conditions.
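Of the error-correcting codes listed above, the XOR code is the simplest; the sketch below shows its basic recovery idea, where one repair symbol is the byte-wise XOR of the source symbols in a block and can rebuild any single lost symbol. The symbol and block sizes are assumptions, not values from the QUIC FEC design.

```python
# Minimal XOR erasure-code sketch: a repair symbol protects a block of source
# symbols against the loss of any one of them. Sizes are assumed, not from QUIC.
SYMBOL_SIZE = 8  # assumed fixed symbol size in bytes

def xor_repair(symbols):
    repair = bytearray(SYMBOL_SIZE)
    for s in symbols:
        for i in range(SYMBOL_SIZE):
            repair[i] ^= s[i]
    return bytes(repair)

def recover(received, repair):
    """received: dict index -> symbol for the symbols that arrived; exactly one is missing."""
    missing = bytearray(repair)
    for s in received.values():
        for i in range(SYMBOL_SIZE):
            missing[i] ^= s[i]
    return bytes(missing)  # the single symbol that was lost

if __name__ == "__main__":
    block = [bytes([i] * SYMBOL_SIZE) for i in range(4)]  # 4 source symbols
    repair = xor_repair(block)
    arrived = {0: block[0], 1: block[1], 3: block[3]}     # symbol 2 was lost
    assert recover(arrived, repair) == block[2]
    print("recovered lost symbol:", recover(arrived, repair).hex())
```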
With the increasing interest in studying Automated Driving System (ADS)-equipped vehicles through simulation, there is a growing need for comprehensive and agile middleware to provide novel Virtual Analysis (VA) functions for ADS-equipped vehicles, enabling a reliable representation for pre-deployment testing. The National Institute of Standards and Technology (NIST) Universal Cyber-physical systems Environment for Federation (UCEF) is such a VA environment. It provides Application Programming Interfaces (APIs) capable of ensuring synchronized interactions across multiple simulation platforms such as LabVIEW, OMNeT++, Ricardo IGNITE, and Internet of Things (IoT) platforms. UCEF can aid engineers and researchers in understanding the impact of different constraints associated with complex cyber-physical systems (CPS). In this work, UCEF is used to produce a simulated Operational Design Domain (ODD) for ADS-equipped vehicles in which control (drive cycle/speed pattern), sensing (obstacle detection, traffic signs and lights), and threats (unusual signals, hacked sources) are represented as UCEF federates that simulate a drive cycle and feed it to vehicle dynamics simulators (e.g., OpenModelica or Ricardo IGNITE) through the Functional Mock-up Interface (FMI). In this way we can subject the vehicle to a wide range of scenarios, collect data on the resulting interactions, and analyze those interactions using metrics to understand the impact on trustworthiness. Trustworthiness is defined here as in the NIST Framework for Cyber-Physical Systems and comprises system reliability, resiliency, safety, security, and privacy. The goal of this work is to provide an example of an experimental design strategy using Fractional Factorial Design for statistically assessing the most important safety metrics in ADS-equipped vehicles.
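As a hedged illustration of the Fractional Factorial Design strategy mentioned above, the sketch below generates a two-level 2^(4-1) design in which the fourth factor is aliased with the three-factor interaction (D = ABC), halving the number of simulation runs. The factor names are placeholders, not the study's actual scenario parameters.

```python
# Hedged sketch of a 2^(4-1) fractional factorial screening design.
# Factor names are illustrative placeholders, not the study's parameters.
from itertools import product

factors = ["drive_cycle", "obstacle_density", "signal_threat", "sensor_noise"]

runs = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b * c  # defining relation D = ABC halves the full factorial
    runs.append(dict(zip(factors, (a, b, c, d))))

for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")  # 8 runs instead of the 16 of a full 2^4 design
```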
Named Data Networking (NDN) is one of the most promising information-centric networking architectures, in which the core concept is to focus on the named data (or contents) themselves. Users in NDN can simply send a request packet to obtain the desired content regardless of its address. The routers in NDN have caching functionality that lets users retrieve a desired file instantly, so a user can obtain the file from nearby nodes instead of the remote host. Nevertheless, NDN is a novel proposal, and there are still open issues to be resolved. In view of previous research, it is a challenge to enforce access control on a specific user and support potential receivers simultaneously. To solve this, we present a fine-grained access control mechanism tailored for NDN that supports data confidentiality, potential receivers, and mobility. Compared to previous works, this is the first to support fine-grained access control and potential receivers simultaneously. Furthermore, the proposed scheme achieves provable security under the DBDH assumption.
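For reference, the standard statement of the Decisional Bilinear Diffie-Hellman (DBDH) assumption invoked above is sketched below; the notation is generic and not taken from this particular paper.

```latex
% DBDH (standard formulation): let (\mathbb{G}, \mathbb{G}_T) be groups of prime
% order p with a bilinear map e : \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T
% and generator g. For random a, b, c \in \mathbb{Z}_p and random Z \in \mathbb{G}_T,
% no efficient adversary can distinguish the two tuples with non-negligible advantage:
\[
  \bigl(g,\, g^{a},\, g^{b},\, g^{c},\, e(g,g)^{abc}\bigr)
  \;\approx_c\;
  \bigl(g,\, g^{a},\, g^{b},\, g^{c},\, Z\bigr).
\]
```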
NDN has been widely regarded as a promising representation and implementation of information-centric networking (ICN) and serves as a potential candidate for the future Internet architecture. However, the security of NDN is threatened by a significant hazard known as the Interest Flooding Attack (IFA), an evolution of DoS and distributed DoS attacks on IP-based networks. IFA attackers can inject numerous malicious interest packets into a named data network to quickly exhaust the bandwidth of communication channels and the cache capacity of NDN routers, thereby seriously affecting the routers' ability to receive and forward packets for normal users. Accurate detection of IFAs is the most critical issue in the design of a countermeasure. To the best of our knowledge, existing IFA countermeasures still have limitations in terms of detection accuracy, especially for rapidly volatile attacks. This article proposes a TC to detect the distributions of normal and malicious interest packets in the NDN routers and thus identify the IFA. A traceback method is then used to prevent further attempts. The simulation results show the efficiency of the TC in mitigating IFAs and its advantages over other typical IFA countermeasures.
Named Data Networking (NDN) is an alternative to the host-centric networking exemplified by today's Internet. One key feature of NDN is in-network caching, which reduces access delay and query overhead by caching popular contents at the source as well as at a few other nodes. Unfortunately, in-network caching is exposed to various privacy risks through different attacks, one of which is the timing attack. This attack infers whether a consumer has recently requested certain contents based on the time difference between the delivery of contents that are currently cached and those that are not. In order to prevent this privacy leakage and resist such attacks, we propose a detection scheme that adopts a Long Short-Term Memory (LSTM) model. Based on the four input features of the LSTM, namely cache hit ratio, average request interval, request frequency, and types of requested contents, we capture the more important eigenvalues in a timely manner by dividing a constant time window into a few small slices in order to detect timing attacks accurately. We have performed extensive simulations to compare our scheme with several other state-of-the-art schemes in terms of classification accuracy, detection ratio, false alarm ratio, and F-measure. The results show that our scheme performs better in all cases studied.
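The sketch below is a minimal PyTorch illustration (not necessarily the authors' framework or architecture) of such a detector: an LSTM consumes a window of per-slice feature vectors, one vector per time slice with the four features named in the abstract, and outputs a normal/attack classification. The window length, hidden size, and training data are assumptions.

```python
# Hedged sketch of an LSTM-based timing-attack detector. The four per-slice
# features follow the abstract (cache hit ratio, average request interval,
# request frequency, requested content type); all hyperparameters are assumed.
import torch
import torch.nn as nn

N_FEATURES, N_SLICES, HIDDEN = 4, 10, 32   # assumed window of 10 time slices

class TimingAttackDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 2)   # normal vs. timing attack

    def forward(self, x):                  # x: (batch, N_SLICES, N_FEATURES)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits from the last hidden state

if __name__ == "__main__":
    model = TimingAttackDetector()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.rand(64, N_SLICES, N_FEATURES)   # placeholder feature windows
    y = torch.randint(0, 2, (64,))             # placeholder labels
    for _ in range(5):                         # tiny training loop for illustration
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("final loss:", float(loss))
```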