Biblio
Information leakage is one of the most important security issues in the current Internet. In Named Data Networking (NDN), Interest names introduce novel vulnerabilities that can be exploited. Malware installed inside a network can encode critical information into Interest names (steganography-like embedding) and leak it out of the network by generating anomalous Interest traffic. This name-based security threat does not exist in IP networks, and addressing it is essential to securing the NDN architecture. This paper performs a risk analysis of information leakage in NDN. We first describe the vulnerabilities associated with Interest names and, as countermeasures, we propose a name-based filter using search engine information and a second filter using a one-class Support Vector Machine (SVM). We collected URLs from the data repository provided by Common Crawl and evaluated the performance of our per-packet filters. We show that our filters drastically reduce the throughput of information leakage, which makes anomalous Interest traffic much easier to detect. It is therefore possible to mitigate information leakage in NDN networks, which is a strong incentive for future deployment of this architecture at the Internet scale.
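As a rough illustration of the SVM-based countermeasure, the sketch below trains a one-class SVM on character n-grams of known-good names and flags deviating Interest names; the feature choice, parameters, and example names are assumptions for illustration, not the paper's exact design.

    # Hedged sketch: one-class SVM over character n-grams of Interest names.
    # Feature choice and parameters are illustrative, not the paper's design.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import OneClassSVM

    legitimate_names = [
        "/com/example/news/2017/index.html",
        "/org/example/wiki/Named_Data_Networking",
        "/com/example/img/logo.png",
    ]
    suspicious_names = [
        "/com/example/aGVsbG8gd29ybGQhIHNlY3JldA==",  # base64-looking payload
    ]

    # Learn the structure of legitimate names from character 2- and 3-grams.
    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
    X = vec.fit_transform(legitimate_names)
    clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X)

    # +1 = consistent with legitimate names, -1 = anomalous (filter the packet).
    print(clf.predict(vec.transform(suspicious_names)))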
Named Data Networking (NDN) is a new, data-centric network architecture design. Questions have been raised about how its performance compares with the established IP architecture, which is generally based on Internet Protocol version 4 (IPv4). Unlike the old architecture, the NDN network does not require source and destination addresses to deliver data, because the address function is replaced by a data name (Name) that identifies the data uniquely. In a computer network, routing is an essential factor in supporting data communication. Routing in an IP network relies only on the Routing Information Base (RIB) derived from the IP table on the router. Consequently, if a problem occurs in the network, such as one node being exposed to a dangerous attack, the IP router must wait until the IP table is updated before the routing path changes. The question of how to change the routing path without updating the IP table has received considerable attention. The NDN network has the advantage of an adaptive forwarding mechanism: the FIB (Forwarding Information Base) of the NDN router keeps information for both the routing and forwarding planes. Therefore, if a problem occurs in the network, the NDN router can detect it more quickly than the IP router. The contribution of this study is to explain the benefit of the NDN forwarding mechanism over the IP forwarding mechanism when a node has suffered a hijack attack.
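To make the contrast concrete, here is a minimal sketch of FIB-style adaptive forwarding, assuming a simple penalty-based ranking of faces (the ranking policy is an illustrative assumption, not NDN's specified strategy):

    # Hedged sketch: NDN-style adaptive forwarding with per-face feedback.
    # The ranking policy (a simple penalty counter) is an illustrative assumption.
    class FibEntry:
        def __init__(self, prefix, faces):
            self.prefix = prefix
            self.penalty = {f: 0 for f in faces}  # lower is better

        def best_face(self):
            return min(self.penalty, key=self.penalty.get)

        def report_timeout(self, face):
            self.penalty[face] += 1   # demote at once, no RIB update needed

        def report_data(self, face):
            self.penalty[face] = max(0, self.penalty[face] - 1)

    entry = FibEntry("/com/example", ["eth0", "eth1"])
    face = entry.best_face()          # "eth0" initially
    entry.report_timeout(face)        # e.g. the downstream node is hijacked
    print(entry.best_face())          # forwarding adapts to "eth1" immediately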
As one of the next-generation network architectures, Named Data Networking (NDN), which features location-independent addressing and content caching, is well suited for deployment in Vehicular Ad-hoc Networks (VANETs). However, a new attack pattern emerges when NDN and VANET are combined: the Interest Packet Popple Broadcast Diffusion Attack (PBDA). No mitigation strategy for PBDA exists so far. In this paper, a mitigation strategy called RVMS, based on a node reputation value (RV), is proposed to detect malicious nodes. A node computes a neighbor's RV through direct and indirect RV evaluation and uses a Markov chain to predict the neighbor's current RV state from its RV history. The RV state is then used to decide whether to discard the Interest packet. Finally, the effectiveness of RVMS is verified through modeling and experiments. The experimental results show that RVMS can mitigate PBDA.
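The Markov-chain step can be sketched as follows, assuming reputation is discretized into three states and the transition matrix is estimated from the neighbor's RV history (the states and drop threshold are illustrative assumptions):

    # Hedged sketch: predict a neighbor's reputation-value (RV) state with a
    # Markov chain estimated from its RV history. The three states and the
    # drop threshold are illustrative assumptions.
    import numpy as np

    STATES = ["bad", "uncertain", "good"]
    idx = {s: i for i, s in enumerate(STATES)}
    history = ["good", "good", "uncertain", "good", "uncertain", "bad",
               "uncertain"]

    # Estimate the transition matrix from observed transitions (with smoothing).
    T = np.ones((3, 3))
    for a, b in zip(history, history[1:]):
        T[idx[a], idx[b]] += 1
    T /= T.sum(axis=1, keepdims=True)

    # One-step prediction from the most recent state.
    p_next = T[idx[history[-1]]]
    print("predicted state:", STATES[int(np.argmax(p_next))])
    drop_interest = p_next[idx["bad"]] > 0.5   # discard Interests if likely bad
    print("drop interest packets:", drop_interest)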
Named Data Networking (NDN) is a future Internet architecture, and NDN forwarding strategies are a hot research topic in MANETs. At present, there are two categories of forwarding strategies in NDN: blind forwarding (BF) and aware forwarding (AF). The BF strategy in which the Data packet returns along the path the Interest came from (DRF) may fail when that path is interrupted by node mobility; the consumer must then wait until the Interest packet times out before requesting the Data packet again. To address this shortcoming of DRF, this paper proposes a neighbor-aware forwarding strategy, called FN, for NDN MANETs. Each node maintains neighbor information and the request information of its neighbor nodes. In the Data packet response phase, a node specifies the next-hop node in order to improve the request satisfaction rate; meanwhile, to reduce the packet loss rate, a node assists its last-hop node in forwarding the packet to the specified node. The simulation results show that, compared with DRF and the greedy forwarding (GF) strategy, FN improves the request satisfaction rate when node density is high.
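A minimal sketch of the neighbor-aware bookkeeping, assuming simple per-name request tables (the data structures and tie-breaking rule are illustrative assumptions, not the paper's exact algorithm):

    # Hedged sketch: neighbor-aware next-hop selection for the Data response
    # phase. Tables and tie-breaking are illustrative assumptions.
    class NeighborAwareNode:
        def __init__(self, node_id):
            self.node_id = node_id
            self.neighbors = set()     # maintained via periodic beacons
            self.requests = {}         # name -> set of neighbors that asked

        def on_interest(self, name, from_neighbor):
            self.neighbors.add(from_neighbor)
            self.requests.setdefault(name, set()).add(from_neighbor)

        def next_hop_for_data(self, name):
            # Specify an explicit next hop: a neighbor that requested the name
            # and is still in range; fall back to broadcast otherwise.
            candidates = self.requests.get(name, set()) & self.neighbors
            return min(candidates) if candidates else "broadcast"

    node = NeighborAwareNode("A")
    node.on_interest("/video/seg1", from_neighbor="B")
    print(node.next_hop_for_data("/video/seg1"))  # -> "B", not a blind flood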
Top-level domains play an important role in the domain name system, so close attention should be paid to their security. In this paper, we found many configuration anomalies in top-level domains by analyzing their resource records. We obtained the resource records of top-level domains from the root name servers and from the authoritative servers of the top-level domains themselves. By comparing these resource records, we observed anomalies in top-level domains. For example, there are 8 servers shared by more than one hundred top-level domains; some TTL fields or SERIAL fields of resource records obtained from different NS servers of the same top-level domain were inconsistent; and some authoritative servers of top-level domains were unreachable. These anomalies may affect the availability of top-level domains. We hope that they draw top-level domain administrators' attention to the security of their domains.
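Checks of this kind can be reproduced with a few queries; the sketch below uses the dnspython library to compare SOA SERIAL fields across all NS servers of one TLD (the chosen TLD and the minimal error handling are illustrative):

    # Hedged sketch: detect inconsistent SOA SERIALs across the NS servers
    # of one TLD using dnspython. Queries live DNS; error handling is minimal.
    import dns.resolver

    tld = "com."  # example TLD
    serials = {}
    for ns in dns.resolver.resolve(tld, "NS"):
        ns_name = str(ns.target)
        ns_addr = str(dns.resolver.resolve(ns_name, "A")[0])
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ns_addr]
        try:
            soa = resolver.resolve(tld, "SOA")[0]
            serials[ns_name] = soa.serial
        except Exception:
            serials[ns_name] = None  # unreachable authoritative server

    if len({s for s in serials.values() if s is not None}) > 1:
        print("SERIAL mismatch:", serials)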
Named Data Networking (NDN) is a content-oriented future Internet architecture that well suits the increasingly mobile and information-intensive applications dominating today's Internet. NDN relies on in-network caching to facilitate content delivery. This makes it challenging to enforce access control, since content cached in the routers is beyond the producer's control. Due to its salient advantages in content delivery, network coding has been introduced into NDN to improve content delivery effectiveness. In this paper, we design ACNC, the first Access Control solution specifically for Network Coding-based NDN. By combining a novel linear AONT (All Or Nothing Transform) and encryption, we can ensure that only a legitimate user who possesses the authorization key can successfully recover the encoding matrix for network coding, and hence recover the content being transmitted. In addition, our design has two salient merits: 1) the linear AONT well suits the linear nature of network coding; 2) only one vector of the encoding matrix needs to be encrypted/decrypted, which incurs only a small computational overhead. Security analysis and experimental evaluation in ndnSIM show that our design can successfully enforce access control on network coding-based NDN with an acceptable overhead.
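For intuition about the linear AONT, here is a generic construction in the style of Stinson's linear AONT (an invertible matrix over GF(p) whose inverse has no zero entries), with a keyed mask standing in for real encryption of one symbol; this illustrates the all-or-nothing property, not ACNC's specific design:

    # Hedged sketch: a Stinson-style linear AONT over GF(p) with M = I + J
    # (J = all-ones). M^-1 = I - J/(n+1) has no zero entries, so every input
    # symbol depends on every output symbol: withhold (encrypt) one output
    # and all inputs are undetermined. The keyed mask stands in for a cipher.
    import numpy as np

    p = 257                       # prime field large enough for byte symbols
    x = np.array([72, 105, 33, 7], dtype=np.int64)   # plaintext symbols
    n = len(x)

    y = (x + x.sum()) % p         # y = (I + J) x  (forward AONT)

    key = 201                     # secret; only this one symbol is "encrypted"
    y_enc = y.copy()
    y_enc[0] = (y_enc[0] + key) % p

    # Legitimate user: undo the mask, then invert the AONT.
    y_dec = y_enc.copy()
    y_dec[0] = (y_dec[0] - key) % p
    inv_n1 = pow(n + 1, -1, p)    # modular inverse of n+1
    x_rec = (y_dec - inv_n1 * y_dec.sum()) % p
    assert list(x_rec) == list(x)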
The Internet of Things (IoT) increasingly demonstrates its role in smart services, such as the smart home, smart grid, and smart transportation. However, due to the lack of standards among different vendors, existing networked IoT devices (NoTs) can hardly provide enough security. Moreover, it is impractical to apply advanced cryptographic solutions to many NoTs due to limited computing capability and power supply. Motivated by these demands, in this paper we develop an IoT security architecture that can protect NoTs in different IoT scenarios. Specifically, the security architecture consists of an auditing module and two network-level security controllers. The auditing module is designed as a stand-alone intrusion detection system for threat detection in a NoT network cluster. The two network-level security controllers are designed to provide security services through either network resource management or cryptographic schemes, regardless of the NoTs' own security capability. We also demonstrate the proposed IoT security architecture with a network-based one-hop confidentiality scheme and a cryptography-based secure link mechanism.
Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily-deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web and store it in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in real time. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google's Certificate Transparency on the server side, with a Firefox extension on the client side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
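The core data structure can be sketched as a cascade of Bloom filters, where each level encodes the false positives of the previous one, yielding an exact membership test over a known universe of certificates; the sizes, hash counts, and serial numbers below are illustrative, not CRLite's tuned parameters:

    # Hedged sketch: a Bloom-filter cascade that exactly encodes the revoked
    # set within a known universe of certificates, in the spirit of CRLite.
    import hashlib

    class Bloom:
        def __init__(self, items, bits=8192, k=3):
            self.bits, self.k = bits, k
            self.array = bytearray(bits // 8)
            for it in items:
                for pos in self._positions(it):
                    self.array[pos // 8] |= 1 << (pos % 8)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.bits

        def __contains__(self, item):
            return all(self.array[p // 8] >> (p % 8) & 1
                       for p in self._positions(item))

    def build_cascade(revoked, valid):
        levels, include, exclude = [], set(revoked), set(valid)
        while include:
            bf = Bloom(include)
            levels.append(bf)
            # The next level encodes this level's false positives.
            include, exclude = {e for e in exclude if e in bf}, include
        return levels

    def is_revoked(cert, levels):
        for depth, bf in enumerate(levels):
            if cert not in bf:
                return depth % 2 == 1  # fell out of an "exclude" level
        return len(levels) % 2 == 1

    revoked = {f"serial-{i}" for i in range(100)}
    valid = {f"serial-{i}" for i in range(100, 5000)}
    cascade = build_cascade(revoked, valid)
    print(is_revoked("serial-7", cascade), is_revoked("serial-4242", cascade))
    # -> True False; exact answers hold for any cert in the known universe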
Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which now need to timestamp and sign the responses for every certificate they have issued on a regular basis. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of required signatures by six orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead to the overall user latency.
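One generic way a single signature can vouch for a whole collection of statuses is a Merkle tree whose root the CA signs once per period, with clients verifying short inclusion proofs; this is offered only as a plausible illustration of the idea, not as CCSP's actual construction:

    # Hedged sketch: one signature covering many certificate statuses via a
    # Merkle tree; clients check short inclusion proofs against the signed root.
    # A generic construction, not necessarily CCSP's exact design.
    import hashlib

    def H(b):
        return hashlib.sha256(b).digest()

    def build_tree(leaves):
        level = [H(l) for l in leaves]
        tree = [level]
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]   # duplicate last node if odd
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree  # tree[-1][0] is the root the CA signs once per period

    def proof(tree, index):
        path = []
        for level in tree[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            path.append((level[index ^ 1], index % 2))
            index //= 2
        return path

    def verify(leaf, path, root):
        h = H(leaf)
        for sibling, is_right in path:
            h = H(sibling + h) if is_right else H(h + sibling)
        return h == root

    statuses = [b"serial=01:good", b"serial=02:revoked", b"serial=03:good"]
    tree = build_tree(statuses)
    print(verify(statuses[1], proof(tree, 1), tree[-1][0]))  # True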
Cryptography and encryption form a topic blurred by complexity, making it difficult for the majority of the public to grasp. Our research focuses on SSL technology involving CAs, a centralized system that manages and issues certificates to web servers and computers for validation of identity. We first explain how the certificate provides a secure connection, creating trust between two parties looking to communicate with one another over the internet. The paper then examines what happens when trust is compromised and how information being transmitted could fall into the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. In order to see why CAre will benefit average and technical users of the internet, one must understand what website security entails. Therefore, this paper dives deep into website security through the use of public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.
Testing and fixing Web Application Firewalls (WAFs) are two relevant and complementary challenges for security analysts. Automated testing helps to cost-effectively detect vulnerabilities in a WAF by generating effective test cases, i.e., attacks. Once vulnerabilities have been identified, the WAF needs to be fixed by augmenting its rule set to filter attacks without blocking legitimate requests. However, existing research suggests that rule sets are very difficult to understand and too complex to be manually fixed. In this paper, we formalise the problem of fixing vulnerable WAFs as a combinatorial optimisation problem. To solve it, we propose an automated approach that combines machine learning with multi-objective genetic algorithms. Given a set of legitimate requests and bypassing SQL injection attacks, our approach automatically infers regular expressions that, when added to the WAF's rule set, prevent many attacks while letting legitimate requests go through. Our empirical evaluation based on both open-source and proprietary WAFs shows that the generated filter rules are effective at blocking previously identified and successful SQL injection attacks (recall between 54.6% and 98.3%), while triggering in most cases no or few false positives (false positive rate between 0% and 2%).
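The flavor of the approach can be conveyed with a heavily simplified sketch: candidate rules are disjunctions of attack tokens, and a scalarized fitness trades blocked attacks against blocked legitimate requests (the token pool, the operators, and the single-objective fitness are simplifications of the paper's multi-objective GA):

    # Hedged sketch: evolving a regex-based WAF rule from example traffic.
    # A toy single-objective stand-in for the paper's multi-objective GA.
    import random, re

    attacks = ["id=1 UNION SELECT pass FROM users", "name=' OR 1=1 --"]
    legit   = ["id=42", "name=union station", "q=select your seat"]

    TOKENS = [r"union\s+select", r"or\s+1=1", r"--", r"'", r"select"]

    def fitness(genome):
        rx = re.compile("|".join(genome), re.IGNORECASE)
        recall = sum(bool(rx.search(a)) for a in attacks) / len(attacks)
        fpr = sum(bool(rx.search(l)) for l in legit) / len(legit)
        return recall - 2 * fpr        # weighted stand-in for two objectives

    def mutate(genome):
        g = set(genome)
        g.symmetric_difference_update({random.choice(TOKENS)})  # toggle a token
        return sorted(g) or [random.choice(TOKENS)]

    pop = [[random.choice(TOKENS)] for _ in range(20)]
    for _ in range(50):
        pop = sorted(pop, key=fitness, reverse=True)[:10]       # select
        pop += [mutate(random.choice(pop)) for _ in range(10)]  # vary
    best = max(pop, key=fitness)
    print("rule:", "|".join(best), "fitness:", fitness(best))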
SQL injection attacks (SQLIAs) pose a serious security threat to database-driven web applications. This kind of attack gives attackers easy access to the application's underlying database and to the potentially sensitive information these databases contain. Through specifically designed input, a hacker can access database content that would otherwise be inaccessible, usually by altering the SQL statements used within web applications. Owing to the importance of web application security, researchers have studied SQLIA detection and prevention extensively and have developed various methods. In this research, after reviewing the existing work in this field, we present a new hybrid method to reduce the vulnerability of web applications. Our method is specifically designed to detect and prevent SQLIAs. It consists of three phases: database design, implementation, and the common gateway interface (CGI). Details of our approach, along with its pros and cons, are discussed.
Web-based applications are becoming increasingly complex and technically sophisticated. The very nature of their feature-rich design and their ability to collate, process, and disseminate information over the Internet or from within an intranet makes them a popular target for attack. According to the Open Web Application Security Project (OWASP) Top Ten Cheat Sheet (2017), SQL injection is at the peak among online attacks, which can be attributed primarily to a lack of awareness of software security. Developing effective SQL injection detection approaches has been a challenge in spite of extensive research in this area. In this paper, we propose a signature-based SQL injection attack detection framework that integrates a fingerprinting method and pattern matching to distinguish genuine SQL queries from malicious ones. Our framework monitors SQL queries to the database and compares them against a dataset of signatures from known SQL injection attacks. If the fingerprinting method cannot determine the legitimacy of a query on its own, the Aho-Corasick algorithm is invoked to ascertain whether attack signatures appear in the query. Initial experimental results indicate that the approach can identify a wide variety of SQL injection attacks with negligible impact on performance.
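The pattern-matching stage can be sketched with a small self-contained Aho-Corasick automaton scanning queries against a signature set (the signatures and the flagging policy are illustrative assumptions):

    # Hedged sketch: Aho-Corasick multi-pattern matching over incoming SQL
    # queries, as a second-stage signature check. Signatures are illustrative.
    from collections import deque

    def build_automaton(patterns):
        trie = [{"next": {}, "fail": 0, "out": []}]
        for pat in patterns:
            node = 0
            for ch in pat:
                if ch not in trie[node]["next"]:
                    trie[node]["next"][ch] = len(trie)
                    trie.append({"next": {}, "fail": 0, "out": []})
                node = trie[node]["next"][ch]
            trie[node]["out"].append(pat)
        queue = deque(trie[0]["next"].values())
        while queue:                                  # BFS sets failure links
            u = queue.popleft()
            for ch, v in trie[u]["next"].items():
                queue.append(v)
                f = trie[u]["fail"]
                while f and ch not in trie[f]["next"]:
                    f = trie[f]["fail"]
                trie[v]["fail"] = trie[f]["next"].get(ch, 0)
                trie[v]["out"] += trie[trie[v]["fail"]]["out"]
        return trie

    def search(trie, text):
        node, hits = 0, []
        for i, ch in enumerate(text):
            while node and ch not in trie[node]["next"]:
                node = trie[node]["fail"]
            node = trie[node]["next"].get(ch, 0)
            for pat in trie[node]["out"]:
                hits.append((i - len(pat) + 1, pat))
        return hits

    SIGNATURES = ["union select", "or 1=1", "xp_cmdshell", "' --"]
    automaton = build_automaton(SIGNATURES)
    query = "SELECT * FROM users WHERE name = '' OR 1=1 --"
    print(search(automaton, query.lower()))
    # -> [(36, 'or 1=1')] : an attack signature was found, flag the query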
Given social media users' plethora of interactions, appropriately controlling access to such information becomes a challenging task for users. Selecting the appropriate audience, even from within their own friend network, can be fraught with difficulties. PACMAN is a potential solution to this dilemma: a personal assistant agent that recommends personalized access control decisions based on the social context of any information disclosure, by incorporating communities generated from the user's network structure and utilizing information in the user's profile. PACMAN provides accurate recommendations while minimizing intrusiveness.
The ubiquity of the Internet and email has provided a mostly insecure communication medium for the consumer. During the last few decades, we have seen the development of several ways to secure email messages. However, these solutions are inflexible and difficult to use for encrypting email messages to protect security and privacy while communicating or collaborating via email. Under the current paradigm, the arduous process of setting up email encryption is non-intuitive for the average user. The complexity of current practices has also led to incorrect interpretations of the architecture by developers, which has resulted in interoperability issues. As a result, the lack of a simple and easy-to-use infrastructure means that consumers still send plain-text emails over insecure networks. In this paper, we introduce and describe a novel, holistic model with new techniques for protecting email messages. The architecture of our model is simpler and easier to use than those currently employed. We use a simplified trust model, which relieves users from having to perform many complex steps to achieve email security. The techniques presented in this paper can safeguard users' email from unauthorized access and protect their privacy. In addition, the simplified infrastructure enables developers to understand the architecture more readily, eliminating interoperability issues.
With the integration of computing, communication, and physical processes, the modern power grid is becoming a large and complex cyber-physical power system (CPPS). This trend is intended to modernize and improve the efficiency of the power grid, yet it makes the CPPS vulnerable to potential cascading failures caused by cyber-attacks, e.g., attacks originating from the cyber network of the CPPS. To prevent these risks, it is essential to analyze how cyber-attacks can be conducted against the CPPS and how they can affect the power systems. Given that General Packet Radio Service (GPRS) has been widely used in CPPSs, this paper provides a case study examining possible cyber-attacks against cyber-physical power systems with a GPRS-based SCADA system. We analyze the vulnerabilities of GPRS-based SCADA systems, focusing on DoS attacks and message spoofing attacks. Furthermore, we show the consequences of these attacks on power systems through a simulation using the IEEE 9-node system; the results confirm that cascading failures propagate through the system under the proposed attacks.
Nowadays, cloud technology is a new computing paradigm that is attracting more computer users, government agencies, and businesses. Cloud technology brings many advantages, particularly ever-present services that everyone can access over the Internet. With cloud computing, a company no longer requires its own physical servers, hardware, networks, or Internet services to support its computer systems. One of the core services offered by cloud technology is storing data in remote storage space. In the last few years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the most significant issue between the customer and the cloud. Cloud computing serves many kinds of customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. For academic clients, security affects computing performance, so cloud providers need to find a way to combine performance and security; for other clients, high performance might not be as critical. In this paper, we design an efficient, secure, and verifiable protocol for outsourcing data. We develop an extended QP (quadratic programming) problem protocol for storing and outsourcing data securely. To verify the correctness of the outsourced computation, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
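The verification idea can be sketched for a convex QP of the form min (1/2) x^T Q x + c^T x subject to A x <= b: a returned solution is optimal exactly when the KKT conditions hold, which the client checks cheaply without re-solving (the instance, claimed solution, and tolerances are illustrative):

    # Hedged sketch: client-side KKT check of a QP result from the cloud:
    # min 0.5 x^T Q x + c^T x  s.t.  A x <= b. The instance is illustrative.
    import numpy as np

    def kkt_ok(Q, c, A, b, x, lam, tol=1e-8):
        primal = np.all(A @ x <= b + tol)                    # feasibility
        dual = np.all(lam >= -tol)                           # multiplier signs
        stationary = np.allclose(Q @ x + c + A.T @ lam, 0, atol=tol)
        slack = np.allclose(lam * (A @ x - b), 0, atol=tol)  # complementarity
        return primal and dual and stationary and slack

    # Toy instance: min x1^2 + x2^2 s.t. x1 + x2 >= 1 (i.e. -x1 - x2 <= -1).
    Q = 2 * np.eye(2)
    c = np.zeros(2)
    A = np.array([[-1.0, -1.0]])
    b = np.array([-1.0])

    x_cloud = np.array([0.5, 0.5])   # solution claimed by the cloud
    lam_cloud = np.array([1.0])      # multiplier claimed by the cloud
    print(kkt_ok(Q, c, A, b, x_cloud, lam_cloud))  # True -> accept the result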
It is well known that access to sensitive information is nowadays performed through a three-tier architecture. Web applications have become a handy interface between users and data. As database-driven web applications are used more and more every day, they are seen as a good target by attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it might be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is, therefore, imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most common web application attacks is the insertion of an SQL query from the client side of the application, an attack called SQL Injection. Since an SQL Injection vulnerability can affect any website or web application that uses an SQL-based database, it is one of the oldest, most prevalent, and most dangerous web application vulnerabilities. To overcome SQL injection problems, different security systems need to be used. In this paper, we use three different scenarios for testing security systems and, using penetration testing techniques, try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.
In several critical military missions, more than one decision level is involved. These decision levels are often independent and distributed, and sensitive pieces of information making up the military mission must be kept hidden from one level to another, even when all of the decision levels cooperate on the same task. Usually, a mission is negotiated over insecure networks such as the Internet using cryptographic protocols, in which a number of security properties have to be ensured. However, designing a secure cryptographic protocol that ensures several properties at once is a very challenging task. In this paper, we propose a new secure protocol for multipart military missions that involve two independent and distributed decision levels with different security levels. We show that it ensures the secrecy, authentication, and non-repudiation properties, and that it resists man-in-the-middle attacks.
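For background on how these three properties are typically obtained, the sketch below shows a generic sign-then-encrypt exchange using the Python cryptography library: the RSA signature gives authentication and non-repudiation, and hybrid encryption to the recipient gives secrecy; this is a standard construction, not the protocol proposed in the paper:

    # Hedged sketch: generic sign-then-encrypt with the 'cryptography' library.
    # Signature -> authentication + non-repudiation; hybrid encryption -> secrecy.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.fernet import Fernet

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    mission = b"rendezvous at grid 4517 at 0300Z"   # illustrative message

    # Sender: sign, then encrypt message + signature under a fresh symmetric key.
    signature = sender_key.sign(
        mission,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(signature + mission)
    wrapped_key = recipient_key.public_key().encrypt(
        sym_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # Recipient: unwrap the key, decrypt, and verify the signature.
    sym_key2 = recipient_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    data = Fernet(sym_key2).decrypt(ciphertext)
    sig, plaintext = data[:256], data[256:]   # RSA-2048 signature is 256 bytes
    sender_key.public_key().verify(           # raises InvalidSignature on tamper
        sig, plaintext,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print(plaintext)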
Software-defined networking (SDN) is enabling radically easier deployment of new routing infrastructures in enterprise and operator networks. However, it is not clear how to best exploit this flexibility, when also considering the migration costs. In this paper, we use tools from network economics to study a recent proposal of using information-centric networking (ICN) principles on an SDN infrastructure for improving the delivery of Internet Protocol (IP) services. The key value proposition of this IP-over-ICN approach is to use the native and lightweight multicast service delivery enabled by the ICN technology to reduce network load by removing redundant data. Our analysis shows that for services where IP multicast delivery is technically feasible, IP-over-ICN deployments are economically sensible if only few users will access the given service simultaneously. However, for services where native IP multicast is not a technically feasible option, such as for dynamically generated or personalized content, IP-over-ICN significantly outperforms IP.
Detection and prevention of data breaches in corporate networks is one of the most important security problems today. The techniques and applications proposed as solutions are not successful when attackers attempt to steal data using steganography. Steganography is the art of storing data in a file called a cover, such as a picture, sound, or video, so that the concealed data cannot be directly recognized in the cover. Steganalysis is the process of revealing the presence of messages embedded in these files. There are many statistical and signature-based steganalysis algorithms. In this work, the detection of steganographic images with steganalysis techniques is reviewed, and a system has been developed that automatically detects steganographic images in network traffic using open-source tools.
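One of the classic statistical techniques in this family is the chi-square attack on LSB embedding (Westfeld and Pfitzmann), sketched below on synthetic pixel data; real use would read the pixel bytes of suspect images carried in the traffic:

    # Hedged sketch: the chi-square attack on LSB image steganography, one of
    # the statistical steganalysis techniques. Synthetic pixels stand in for
    # real image bytes.
    import numpy as np
    from scipy.stats import chi2

    def chi_square_lsb(pixels, bins=256):
        hist = np.bincount(pixels, minlength=bins).astype(float)
        even, odd = hist[0::2], hist[1::2]
        expected = (even + odd) / 2.0
        mask = expected > 0                    # skip empty value pairs
        stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
        dof = mask.sum() - 1
        # High p-value => pair frequencies equalized => likely LSB stego.
        return chi2.sf(stat, dof)

    rng = np.random.default_rng(0)
    cover = rng.binomial(255, 0.4, size=100_000).astype(np.int64)  # plain image
    stego = (cover & ~1) | rng.integers(0, 2, size=cover.size)     # random LSBs

    print("cover p-value:", chi_square_lsb(cover))   # near 0
    print("stego p-value:", chi_square_lsb(stego))   # near 1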
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
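An analysis in this style can be sketched with statsmodels, regressing abuse counts on TLD properties with a negative binomial model; the synthetic data, covariate names, and coefficients are illustrative assumptions, not the paper's dataset or results:

    # Hedged sketch: count regression of abuse against TLD properties,
    # mirroring the style of analysis described. All data here is synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    log_size = rng.normal(10, 2, n)          # log of registered domains
    free_regs = rng.integers(0, 2, n)        # 1 if free registrations offered
    dnssec = rng.uniform(0, 0.6, n)          # DNSSEC deployment rate

    # Synthetic ground truth: abuse grows with size and free registrations,
    # and shrinks with DNSSEC deployment.
    mu = np.exp(-6 + 0.8 * log_size + 0.7 * free_regs - 1.5 * dnssec)
    abuse = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([log_size, free_regs, dnssec]))
    model = sm.GLM(abuse, X, family=sm.families.NegativeBinomial())
    result = model.fit()
    print(result.summary(xname=["const", "log_size", "free_regs", "dnssec"]))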