Biblio

Found 944 results

Filters: Keyword is Internet
2018-06-11
Kondo, D., Silverston, T., Tode, H., Asami, T., Perrin, O..  2017.  Risk analysis of information-leakage through interest packets in NDN. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :360–365.

Information leakage is one of the most important security issues in the current Internet. In Named Data Networking (NDN), Interest names introduce novel vulnerabilities that can be exploited. By planting malware, an attacker can use Interest names to encode critical information (steganographically embedded) and leak it out of the network by generating anomalous Interest traffic. This security threat based on Interest names does not exist in IP networks, and solving it is essential to securing the NDN architecture. This paper performs a risk analysis of information leakage in NDN. We first describe the vulnerabilities associated with Interest names and, as countermeasures, propose a name-based filter using search engine information and a second filter using a one-class Support Vector Machine (SVM). We collected URLs from the data repository provided by Common Crawl and evaluated the performance of our per-packet filters. We show that our filters can drastically choke the throughput of information leakage, which makes anomalous Interest traffic easier to detect. It is therefore possible to mitigate information leakage in an NDN network, which is a strong incentive for future deployment of this architecture at Internet scale.
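
To make the SVM countermeasure concrete, here is a minimal sketch of a one-class filter trained only on names derived from legitimate URLs. The feature set (name length, component count, character entropy) and the scikit-learn setup are our illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a one-class SVM filter over NDN Interest names,
# trained on legitimate names only (e.g., derived from Common Crawl URLs).
import math
from collections import Counter

from sklearn.svm import OneClassSVM

def features(name):
    """Map an Interest name such as /com/example/index.html to a feature vector."""
    components = [c for c in name.split("/") if c]
    counts = Counter(name)
    entropy = -sum((n / len(name)) * math.log2(n / len(name)) for n in counts.values())
    return [len(name), len(components), entropy]

# train on names derived from legitimate URLs (tiny illustrative sample)
legitimate = ["/com/example/index.html", "/org/example/wiki/NDN", "/com/example/news"]
clf = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit([features(n) for n in legitimate])

# predict() returns -1 for outliers, i.e. candidate information-leaking Interests
print(clf.predict([features("/com/example/aGVsbG8gc2VjcmV0IGtleQ==")]))
```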

Rohmah, Y. N., Sudiharto, D. W., Herutomo, A..  2017.  The performance comparison of forwarding mechanism between IPv4 and Named Data Networking (NDN). Case study: A node compromised by the prefix hijack. 2017 3rd International Conference on Science in Information Technology (ICSITech). :302–306.

Named Data Networking (NDN) is a new, data-centric network architecture design. Questions have been raised about how its performance compares with that of older architectures such as IP networks, which are generally based on Internet Protocol version 4 (IPv4). Unlike the old architecture, NDN does not require source and destination addresses to deliver data, because their function is replaced by a data name (Name) that identifies the data uniquely. In a computer network, routing is an essential factor in supporting data communication. Routing in an IP network relies only on the Routing Information Base (RIB) derived from the IP table on the router; if there is a problem on the network, such as one node being compromised by a dangerous attack, the IP router must wait until the IP table is updated before the routing path changes. The issue of how to change the routing path without updating the IP table has received considerable critical attention. The NDN network has the advantage of an adaptive forwarding mechanism: the FIB (Forwarding Information Base) of the NDN router keeps information for both the routing and forwarding planes. Therefore, if there is a problem on the network, the NDN router can detect it more quickly than an IP router. The contribution of this study is to explain the benefit of the NDN forwarding mechanism over the IP forwarding mechanism when a node has been compromised by a prefix hijack attack.
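
The adaptive-forwarding advantage described above can be illustrated with a toy FIB entry that ranks several next-hop faces per prefix and demotes a face as soon as it misbehaves, with no routing-table update needed. This is a hedged sketch of the general NDN mechanism, not the paper's experiment.

```python
# Toy sketch of NDN adaptive forwarding: a FIB entry keeps *several* ranked
# next-hop faces, so a router can route around a hijacked node immediately,
# without waiting for a routing (RIB/IP table) update as an IP router would.
class FibEntry:
    def __init__(self, faces):
        # lower score = more preferred; scores adapt to observed behaviour
        self.scores = {face: 0 for face in faces}

    def best_face(self):
        return min(self.scores, key=self.scores.get)

    def report_failure(self, face):
        # e.g., an Interest timed out or a NACK came back through this face
        self.scores[face] += 10      # demote: the next Interest tries another face

    def report_success(self, face):
        self.scores[face] = max(0, self.scores[face] - 1)

entry = FibEntry(faces=["face-A", "face-B"])   # face-A leads through the hijacked node
entry.report_failure("face-A")                 # detected via timeout/NACK
assert entry.best_face() == "face-B"           # the forwarding plane adapts by itself
```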

Zhang, X., Li, R., Zhao, W., Wu, R..  2017.  Detection of malicious nodes in NDN VANET for Interest Packet Popple Broadcast Diffusion Attack. 2017 11th IEEE International Conference on Anti-counterfeiting, Security, and Identification (ASID). :114–118.

As one of the next-generation network architectures, Named Data Networking (NDN), which features location-independent addressing and content caching, is well suited to deployment in Vehicular Ad-hoc Networks (VANETs). However, a new attack pattern arises when NDN and VANET are combined: the Interest Packet Popple Broadcast Diffusion Attack (PBDA). No mitigation strategy for PBDA exists yet. In this paper, a mitigation strategy called RVMS, based on node reputation values (RVs), is proposed to detect malicious nodes. A node computes a neighbor node's RV through direct and indirect RV evaluation and uses a Markov chain to predict the neighbor's current RV state from its RV history. The RV state is used to decide whether to discard the Interest packet. Finally, the effectiveness of RVMS is verified through modeling and experiment. The experimental results show that RVMS can mitigate PBDA.
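
A minimal sketch of the RVMS idea follows, with our own assumptions for the state granularity and the direct/indirect weighting: discretize a neighbour's RV into states, estimate a Markov transition matrix from its RV history, and predict the next state to decide whether to discard its Interests.

```python
# Hedged sketch of RV-based malicious-node detection with a Markov chain.
# The three-state discretization and the 0.5/0.5 weighting are our assumptions.
import numpy as np

STATES = ["bad", "suspect", "good"]

def to_state(rv):                      # RV assumed to lie in [0, 1]
    return min(int(rv * 3), 2)

def transition_matrix(history):
    """history: RV samples for one neighbour, oldest first."""
    P = np.ones((3, 3))                # Laplace smoothing
    for prev, cur in zip(history, history[1:]):
        P[to_state(prev), to_state(cur)] += 1
    return P / P.sum(axis=1, keepdims=True)

def combined_rv(direct, indirect, w=0.5):
    return w * direct + (1 - w) * indirect

history = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]          # a neighbour going rogue
P = transition_matrix(history)
next_state = np.argmax(P[to_state(combined_rv(0.2, 0.3))])
if STATES[next_state] == "bad":
    print("discard Interest packets from this neighbour")
```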

Zhang, X., Li, R., Zhao, H..  2017.  Neighbor-aware based forwarding strategy in NDN-MANET. 2017 11th IEEE International Conference on Anti-counterfeiting, Security, and Identification (ASID). :125–129.

Named Data Networking (NDN) is a future Internet architecture, and NDN forwarding strategies are a hot research topic in MANETs. At present, there are two categories of forwarding strategies in NDN: blind forwarding (BF) and aware forwarding (AF). The BF strategy in which the Data packet returns along the path the Interest came from (DRF) may fail when that path is interrupted by node mobility; the consumer must then wait until the Interest packet times out before requesting the Data packet again. To address this deficiency of DRF, this paper proposes a neighbor-aware forwarding strategy, called FN, for NDN MANETs. Each node maintains neighbor information and the request information of its neighbor nodes. In the Data packet response phase, a node specifies the next-hop node in order to improve the request satisfaction rate; meanwhile, to reduce the packet loss rate, a node assists its last-hop node in forwarding the packet to the specified node. The simulation results show that, compared with DRF and the greedy forwarding (GF) strategy, FN improves the request satisfaction rate when node density is high.

Wang, M., Zhang, Z., Xu, H..  2017.  DNS configurations and its security analyzing via resource records of the top-level domains. 2017 11th IEEE International Conference on Anti-counterfeiting, Security, and Identification (ASID). :21–25.

Top-level domains play an important role in the domain name system, and close attention should be paid to their security. In this paper, we found many configuration anomalies in top-level domains by analyzing their resource records, which we obtained from the root name servers and from the authoritative servers of the top-level domains themselves. By comparing these resource records, we observed several anomalies: for example, eight servers are shared by more than one hundred top-level domains; some TTL or SERIAL fields of resource records obtained from different NS servers of the same top-level domain were inconsistent; and some authoritative servers of top-level domains were unreachable. These anomalies may affect the availability of top-level domains. We hope they draw top-level domain administrators' attention to the security of their domains.
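
The comparison the authors describe can be reproduced in miniature: query every NS server of a TLD for its SOA record and flag inconsistent SERIAL values or unreachable servers. A sketch assuming the dnspython package:

```python
# Hedged sketch of a TLD consistency check: compare SOA SERIAL values
# obtained from each NS server of the same top-level domain.
import dns.message
import dns.query
import dns.resolver

def soa_serials(tld):
    """Return {nameserver: SOA SERIAL} for every NS server of the given TLD."""
    serials = {}
    for ns in dns.resolver.resolve(tld, "NS"):
        ns_name = str(ns.target)
        ns_ip = str(dns.resolver.resolve(ns_name, "A")[0])
        query = dns.message.make_query(tld, "SOA")
        try:
            reply = dns.query.udp(query, ns_ip, timeout=3)
            serials[ns_name] = reply.answer[0][0].serial
        except Exception:
            serials[ns_name] = None   # unreachable server: itself an anomaly
    return serials

serials = soa_serials("com.")
if len({s for s in serials.values() if s is not None}) > 1:
    print("inconsistent SERIAL fields across NS servers:", serials)
```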

Wu, D., Xu, Z., Chen, B., Zhang, Y..  2017.  Towards Access Control for Network Coding-Based Named Data Networking. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Named Data Networking (NDN) is a content-oriented future Internet architecture that well suits the increasingly mobile and information-intensive applications dominating today's Internet. NDN relies on in-network caching to facilitate content delivery, which makes it challenging to enforce access control: once the content has been cached in the routers, the content producer has lost control over it. Due to its salient advantages in content delivery, network coding has been introduced into NDN to improve content delivery effectiveness. In this paper, we design ACNC, the first Access Control solution specifically for Network Coding-based NDN. By combining a novel linear AONT (All Or Nothing Transform) with encryption, we can ensure that only a legitimate user who possesses the authorization key can successfully recover the encoding matrix for network coding, and hence the content being transmitted. In addition, our design has two salient merits: 1) the linear AONT well suits the linear nature of network coding; 2) only one vector of the encoding matrix needs to be encrypted/decrypted, which incurs only a small computational overhead. Security analysis and experimental evaluation in ndnSIM show that our design successfully enforces access control on network coding-based NDN with an acceptable overhead.
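
The paper's AONT is linear, to match network coding; as a rough stand-in that still shows the all-or-nothing property, here is a hash-based, Rivest-style package transform. Losing any output block makes the package key, and hence every block, unrecoverable; in ACNC, only one vector of the transformed encoding matrix is then actually encrypted.

```python
# Hedged, hash-based stand-in for the paper's *linear* AONT: a package
# transform in which every output block is needed to recover any input block.
import hashlib
import os

B = 32  # block size in bytes

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def aont_package(blocks):
    K = os.urandom(B)                                 # one-time package key
    out = [xor(m, H(K, i.to_bytes(4, "big"))) for i, m in enumerate(blocks)]
    tail = K
    for c in out:
        tail = xor(tail, H(c))                        # K hidden behind *all* blocks
    return out + [tail]

def aont_unpackage(package):
    *out, tail = package
    K = tail
    for c in out:
        K = xor(K, H(c))                              # needs every single block
    return [xor(c, H(K, i.to_bytes(4, "big"))) for i, c in enumerate(out)]

blocks = [os.urandom(B) for _ in range(4)]            # e.g., rows of an encoding matrix
assert aont_unpackage(aont_package(blocks)) == blocks
```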

Ye, F., Qian, Y..  2017.  A Security Architecture for Networked Internet of Things Devices. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

The Internet of Things (IoT) increasingly demonstrates its role in smart services, such as the smart home, smart grid, and smart transportation. However, due to the lack of standards among different vendors, existing networked IoT devices (NoTs) can hardly provide enough security. Moreover, it is impractical to apply advanced cryptographic solutions to many NoTs due to their limited computing capability and power supply. Motivated by these IoT demands, in this paper we develop an IoT security architecture that can protect NoTs in different IoT scenarios. Specifically, the security architecture consists of an auditing module and two network-level security controllers. The auditing module is designed as a stand-alone intrusion detection system for threat detection in a NoT network cluster. The two network-level security controllers are designed to provide security services through either network resource management or cryptographic schemes, regardless of the NoTs' security capability. We also demonstrate the proposed IoT security architecture with a network-based one-hop confidentiality scheme and a cryptography-based secure link mechanism.

2018-06-07
Larisch, J., Choffnes, D., Levin, D., Maggs, B. M., Mislove, A., Wilson, C..  2017.  CRLite: A Scalable System for Pushing All TLS Revocations to All Browsers. 2017 IEEE Symposium on Security and Privacy (SP). :539–556.

Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily-deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web, and store them in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in real-time. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google's Certificate Transparency on the server-side, with a Firefox extension on the client-side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
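
The filter cascade can be sketched as alternating Bloom filters: level 1 holds the revoked set, level 2 holds the valid certificates that level 1 falsely accepts, and so on until no false positives remain, at which point membership queries become exact. The tiny Bloom filter and its parameters below are illustrative, not CRLite's engineering.

```python
# Hedged sketch of a Bloom-filter cascade: each level stores the false
# positives of the previous one, so revocation lookups become exact.
import hashlib

class Bloom:
    def __init__(self, items, bits_per_item=16, k=4):
        self.m = max(8, bits_per_item * max(1, len(items)))
        self.k = k
        self.bits = bytearray(self.m // 8 + 1)
        for it in items:
            for pos in self._positions(it):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(1, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def build_cascade(revoked, valid):
    levels, include, exclude = [], set(revoked), set(valid)
    while include:
        bf = Bloom(include)
        levels.append(bf)
        include, exclude = {x for x in exclude if x in bf}, include
    return levels

def is_revoked(cert, levels):
    for depth, bf in enumerate(levels):
        if cert not in bf:
            return depth % 2 == 1    # falling out at an even depth means not revoked
    return len(levels) % 2 == 1      # survived every level

revoked = {b"cert-1", b"cert-7"}
valid = {b"cert-%d" % i for i in range(100)} - revoked
levels = build_cascade(revoked, valid)
assert all(is_revoked(c, levels) for c in revoked)
assert not any(is_revoked(c, levels) for c in valid)
```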

Chariton, A. A., Degkleri, E., Papadopoulos, P., Ilia, P., Markatos, E. P..  2017.  CCSP: A compressed certificate status protocol. IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. :1–9.

Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite the existence of these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate as still valid, so that the browser communicates (over TLS) with web servers controlled by the attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which must now timestamp and sign all responses on a regular basis for every certificate they have issued. To mitigate this overhead and address the described cyber-attacks, we present CCSP: a new approach to providing timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by six orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead to the overall user latency.

Berkowsky, J., Rana, N., Hayajneh, T..  2017.  CAre: Certificate Authority Rescue Engine for Proactive Security. 2017 14th International Symposium on Pervasive Systems, Algorithms and Networks 2017 11th International Conference on Frontier of Computer Science and Technology 2017 Third International Symposium of Creative Computing (ISPAN-FCST-ISCC). :79–86.

Cryptography and encryption are topics blurred by their complexity, making them difficult for the majority of the public to grasp. Our research focuses on SSL technology involving CAs, a centralized system that manages and issues certificates to web servers and computers for validating identity. We first explain how a certificate provides a secure connection, creating trust between two parties looking to communicate with one another over the Internet. The paper then examines what happens when trust is compromised and how transmitted information could end up in the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. To see why CAre will benefit both average and technical users of the Internet, one must understand what website security entails; therefore, this paper dives deep into website security through public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.

Appelt, D., Panichella, A., Briand, L..  2017.  Automatically Repairing Web Application Firewalls Based on Successful SQL Injection Attacks. 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). :339–350.

Testing and fixing Web Application Firewalls (WAFs) are two relevant and complementary challenges for security analysts. Automated testing helps to cost-effectively detect vulnerabilities in a WAF by generating effective test cases, i.e., attacks. Once vulnerabilities have been identified, the WAF needs to be fixed by augmenting its rule set to filter attacks without blocking legitimate requests. However, existing research suggests that rule sets are very difficult to understand and too complex to be manually fixed. In this paper, we formalise the problem of fixing vulnerable WAFs as a combinatorial optimisation problem. To solve it, we propose an automated approach that combines machine learning with multi-objective genetic algorithms. Given a set of legitimate requests and bypassing SQL injection attacks, our approach automatically infers regular expressions that, when added to the WAF's rule set, prevent many attacks while letting legitimate requests go through. Our empirical evaluation based on both open-source and proprietary WAFs shows that the generated filter rules are effective at blocking previously identified and successful SQL injection attacks (recall between 54.6% and 98.3%), while triggering in most cases no or few false positives (false positive rate between 0% and 2%).
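
The two objectives driving the genetic search can be stated as a simple fitness function: a candidate filter regex is scored by its recall on bypassing attacks and its false-positive rate on legitimate requests. The example rule and requests below are hypothetical.

```python
# Hedged sketch of the two objectives the multi-objective search optimizes:
# recall on known bypassing attacks vs. false positives on legitimate traffic.
import re

def fitness(rule, attacks, legitimate):
    pattern = re.compile(rule, re.IGNORECASE)
    recall = sum(bool(pattern.search(a)) for a in attacks) / len(attacks)
    fpr = sum(bool(pattern.search(l)) for l in legitimate) / len(legitimate)
    return recall, fpr           # a multi-objective GA keeps the Pareto front

attacks = ["id=1' OR '1'='1", "name=x'; DROP TABLE users;--"]
legitimate = ["id=42", "name=O'Brien"]    # naive quote-blocking would hit this one
print(fitness(r"'\s*(or|;|--)", attacks, legitimate))   # (1.0, 0.0)
```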

Ghafarian, A..  2017.  A hybrid method for detection and prevention of SQL injection attacks. 2017 Computing Conference. :833–838.

SQL injection attacks (SQLIAs) pose a serious security threat to database-driven web applications. This kind of attack gives attackers easy access to an application's underlying database and to the potentially sensitive information it contains: through specifically designed input, a hacker can access database content that would otherwise be inaccessible, usually by altering the SQL statements used within the web application. Given the importance of web application security, researchers have studied SQLIA detection and prevention extensively and have developed various methods. In this research, after reviewing the existing work in this field, we present a new hybrid method to reduce the vulnerability of web applications. Our method is specifically designed to detect and prevent SQLIAs and consists of three phases: database design, implementation, and the common gateway interface (CGI). Details of our approach, along with its pros and cons, are discussed.

Appiah, B., Opoku-Mensah, E., Qin, Z..  2017.  SQL injection attack detection using fingerprints and pattern matching technique. 2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS). :583–587.

Web-based applications are becoming increasingly complex and technically sophisticated. The very nature of their feature-rich design and their capability to collate, process, and disseminate information over the Internet or from within an intranet makes them a popular target for attack. According to the Open Web Application Security Project (OWASP) Top Ten cheat sheet for 2017, SQL injection sits at the top of online attacks, which can be attributed primarily to a lack of awareness of software security. Developing effective SQL injection detection approaches has remained a challenge in spite of extensive research in this area. In this paper, we propose a signature-based SQL injection attack detection framework that integrates a fingerprinting method with pattern matching to distinguish genuine SQL queries from malicious ones. Our framework monitors SQL queries to the database and compares them against a dataset of signatures from known SQL injection attacks. If the fingerprinting method cannot determine the legitimacy of a query on its own, the Aho-Corasick algorithm is invoked to ascertain whether attack signatures appear in the query. Initial experimental results indicate that the approach can identify a wide variety of SQL injection attacks with negligible impact on performance.
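
A minimal sketch of the second stage follows, scanning a query against a signature set with Aho-Corasick (here via the pyahocorasick package); the signatures are illustrative stand-ins for a dataset of known SQL injection attacks.

```python
# Hedged sketch of Aho-Corasick signature matching over incoming SQL queries.
import ahocorasick

signatures = ["union select", "or 1=1", "'--", "xp_cmdshell", "sleep("]

automaton = ahocorasick.Automaton()
for idx, sig in enumerate(signatures):
    automaton.add_word(sig, (idx, sig))
automaton.make_automaton()

def contains_signature(query):
    # iter() yields every signature occurrence in a single pass over the query
    return any(True for _ in automaton.iter(query.lower()))

print(contains_signature("SELECT * FROM users WHERE id=5 OR 1=1"))  # True
print(contains_signature("SELECT name FROM users WHERE id=5"))      # False
```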

Lodeiro-Santiago, Moisés, Caballero-Gil, Cándido, Caballero-Gil, Pino.  2017.  Collaborative SQL-injections Detection System with Machine Learning. Proceedings of the 1st International Conference on Internet of Things and Machine Learning. :45:1–45:5.

Data mining and information extraction from data is a field that has gained relevance in recent years thanks to techniques based on artificial intelligence and the use of machine and deep learning. The main aim of the present work is the development of a tool, based on a prior behaviour study of security audit tools (oriented to SQL pentesting), for creating testing sets capable of accurately detecting SQL attacks. The study is based on information collected through the web server logs generated in a pentesting laboratory environment. Then, using the common patterns extracted from the logs, each attack vector is classified into risk levels (dangerous attack, normal attack, non-attack, etc.). Finally, training on the generated data produces a classifier system whose performance in positive attack detection varies between 97 and 99 percent. The training data is shared with other servers to create a distributed network capable of deciding whether a query is an attack or a genuine request, and of informing connected clients so they can block requests from the attacker's IP.
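
A hedged sketch of the training step: character-n-gram features over requests extracted from server logs, fed to a linear classifier with scikit-learn. The tiny corpus and feature choices are illustrative, not the paper's pipeline.

```python
# Hedged sketch of training a SQL-attack classifier on log-derived requests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "GET /item?id=17",
    "GET /item?id=17' UNION SELECT user,pass FROM admin--",
    "GET /search?q=shoes",
    "GET /search?q=1' OR '1'='1",
]
labels = [0, 1, 0, 1]   # 0 = genuine request, 1 = attack

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(requests, labels)

# in the collaborative setting, the trained model (or its training data) is
# shared with other servers so each can score queries and block offending IPs
print(model.predict(["GET /item?id=17' OR '1'='1"]))   # expected: [1] (attack)
```
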
2018-05-30
Misra, G., Such, J. M..  2017.  PACMAN: Personal Agent for Access Control in Social Media. IEEE Internet Computing. 21:18–26.

Given social media users' plethora of interactions, appropriately controlling access to their information becomes a challenging task. Selecting the appropriate audience, even from within their own friend network, can be fraught with difficulties. PACMAN is a potential solution to this dilemma: a personal assistant agent that recommends personalized access control decisions based on the social context of any information disclosure, by incorporating communities generated from the user's network structure and utilizing information in the user's profile. PACMAN provides accurate recommendations while minimizing intrusiveness.

Nourai, M., Levkowitz, H..  2017.  Securing Email for the Average Users via a New Architecture. 2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM). :1–6.

The ubiquity of the Internet and email has provided a mostly insecure communication medium for the consumer. During the last few decades, several ways to secure email messages have been developed; however, these solutions are inflexible and difficult to use for encrypting email to protect security and privacy while communicating or collaborating. Under the current paradigm, the arduous process of setting up email encryption is non-intuitive for the average user. The complexity of current practices has also led developers to interpret the architecture incorrectly, resulting in interoperability issues. As a result, the lack of a simple, easy-to-use infrastructure means that consumers still send plain-text email over insecure networks. In this paper, we introduce and describe a novel, holistic model with new techniques for protecting email messages. The architecture of our model is simpler and easier to use than those currently employed. We use a simplified trust model, which relieves users of many of the complex steps otherwise needed to achieve email security. The techniques presented in this paper can safeguard users' email from unauthorized access and protect their privacy. In addition, a simplified infrastructure enables developers to understand the architecture more readily, eliminating interoperability issues.

2018-05-24
Zhang, T., Wang, Y., Liang, X., Zhuang, Z., Xu, W..  2017.  Cyber Attacks in Cyber-Physical Power Systems: A Case Study with GPRS-Based SCADA Systems. 2017 29th Chinese Control And Decision Conference (CCDC). :6847–6852.

With the integration of computing, communication, and physical processes, the modern power grid is becoming a large and complex cyber-physical power system (CPPS). This trend is intended to modernize and improve the efficiency of the power grid, yet it makes the CPPS vulnerable to potential cascading failures caused by cyber-attacks, e.g., attacks originating from the cyber network of the CPPS. To prevent these risks, it is essential to analyze how cyber-attacks can be conducted against the CPPS and how they can affect the power systems. Given that General Packet Radio Service (GPRS) is widely used in CPPSs, this paper provides a case study examining possible cyber-attacks against cyber-physical power systems with GPRS-based SCADA systems. We analyze the vulnerabilities of GPRS-based SCADA systems, focusing on DoS attacks and message spoofing attacks. Furthermore, we show the consequences of these attacks on power systems through a simulation using the IEEE 9-node system; the results show that the proposed attacks can propagate cascading failures through the system.

Priya, K., ArokiaRenjit, J..  2017.  Data Security and Confidentiality in Public Cloud Storage by Extended QP Protocol. 2017 International Conference on Computation of Power, Energy Information and Commuincation (ICCPEIC). :235–240.

Nowadays, cloud technology is a new computing paradigm that attracts ever more computer users, government agencies, and businesses. Cloud technology brings many advantages, particularly ever-present services that everyone can access over the Internet. With cloud computing, there is no requirement for the physical servers or hardware that would otherwise support a company's computer systems, networks, and Internet services. One of the core services offered by cloud technology is storing data in remote storage space. In the last few years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the most significant issue between the customer and the cloud. Cloud computing serves many kinds of customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. If the cloud's clients are academia, security affects computing performance, and for this type of client the cloud provider needs to find a way to combine performance with security; high performance may not be as critical for other clients as it is for academia. In this paper, we design an efficient, secure, and verifiable protocol for outsourcing data, developing an extended QP (quadratic programming) problem protocol for storing and outsourcing data securely. To verify the correctness of the outsourced computation, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
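
For a convex QP, the verification step the abstract mentions amounts to checking the Karush-Kuhn-Tucker conditions on the result returned by the cloud; for convex problems they are necessary and sufficient at the optimum. A generic statement follows (the paper's exact formulation may differ):

```latex
% KKT conditions for the convex QP
%   min_x (1/2) x^T Q x + c^T x   s.t.  A x \le b,   with  Q \succeq 0.
% A returned pair (x^*, \lambda^*) is optimal iff:
\begin{align*}
  Q x^* + c + A^{\top} \lambda^* &= 0   && \text{(stationarity)} \\
  A x^* &\le b                          && \text{(primal feasibility)} \\
  \lambda^* &\ge 0                      && \text{(dual feasibility)} \\
  \lambda^{*\top} (A x^* - b) &= 0      && \text{(complementary slackness)}
\end{align*}
```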

Maraj, A., Rogova, E., Jakupi, G., Grajqevci, X..  2017.  Testing Techniques and Analysis of SQL Injection Attacks. 2017 2nd International Conference on Knowledge Engineering and Applications (ICKEA). :55–59.

It is well known that access to sensitive information is nowadays performed through a three-tier architecture: web applications have become a handy interface between users and data. As database-driven web applications are used more and more every day, they are seen as a good target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it might be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is therefore imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most commonly used web application attacks is the insertion of an SQL query from the client side of the application, called SQL injection. Since an SQL injection vulnerability can affect any website or web application that uses an SQL-based database, it is one of the oldest, most prevalent, and most dangerous web application vulnerabilities. To overcome SQL injection problems, different security systems must be used. In this paper, we use three different scenarios for testing security systems and, using penetration testing techniques, try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.

2018-05-16
Fattahi, J., Mejri, M., Ziadia, M., Ghayoula, E., Samoud, O., Pricop, E..  2017.  Cryptographic protocol for multipart missions involving two independent and distributed decision levels in a military context. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :1127–1132.

In several critical military missions, more than one decision level is involved. These decision levels are often independent and distributed, and sensitive pieces of information making up the military mission must be kept hidden from one level to another, even when all of the decision levels cooperate to accomplish the same task. Usually, a mission is negotiated over insecure networks such as the Internet using cryptographic protocols, in which a few security properties have to be ensured. However, designing a secure cryptographic protocol that ensures several properties at once is a very challenging task. In this paper, we propose a new secure protocol for multipart military missions that involve two independent and distributed decision levels with different security levels. We show that it ensures the secrecy, authentication, and non-repudiation properties, and that it resists man-in-the-middle attacks.
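
The paper's protocol is not specified in the abstract, but the three properties it ensures can be illustrated generically with a sign-then-encrypt sketch, assuming Python's cryptography package, with Fernet standing in for a key already shared between the two decision levels.

```python
# Generic sign-then-encrypt sketch (NOT the paper's protocol) illustrating
# secrecy, authentication, and non-repudiation on an insecure network.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# signing keys for one decision level; the verify key is distributed beforehand
sender_sk = Ed25519PrivateKey.generate()
sender_vk = sender_sk.public_key()

channel = Fernet(Fernet.generate_key())   # stands in for a shared session key

order = b"mission part 1: rendezvous coordinates ..."
signature = sender_sk.sign(order)                 # non-repudiation: only this level can sign
ciphertext = channel.encrypt(signature + order)   # secrecy on the insecure network

plaintext = channel.decrypt(ciphertext)
sig, msg = plaintext[:64], plaintext[64:]         # Ed25519 signatures are 64 bytes
sender_vk.verify(sig, msg)                        # authentication; raises if tampered
print(msg)
```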

2018-05-09
Jillepalli, A. A., Leon, D. C. d, Steiner, S., Sheldon, F. T., Haney, M. A..  2017.  Hardening the Client-Side: A Guide to Enterprise-Level Hardening of Web Browsers. 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). :687–692.

Today, web browsers are a major avenue for cyber-compromise and data breaches. Web browser hardening, through high-granularity, least-privilege, tailored configurations, can help prevent or mitigate many of these attack avenues. For example, on a classic client desktop infrastructure, an enforced configuration that lets users use one browser to connect to critical and trusted websites and a different browser for untrusted sites (the former restricted to trusted sites, the latter with JavaScript and plugins disabled by default) may help prevent most JavaScript- and plugin-based attacks on critical enterprise sites. However, most organizations today still allow web browsers to run with their default configurations and allow users to use the same browser to connect to trusted and untrusted sites alike. In this article, we present detailed steps for remotely hardening multiple web browsers, Internet Explorer and Google Chrome, in a Windows-based enterprise. We hope that system administrators use this guide to jump-start an enterprise-wide strategy for implementing high-granularity and least-privilege browser hardening. This will help secure enterprise systems at the front end, in addition to the network perimeter.
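
As a taste of what such an enforced configuration might look like for the untrusted-sites browser, here is a hedged sketch of a managed Chrome policy that blocks JavaScript and plugins by default. Policy names should be verified against the Chrome policy list for the deployed version; on a Windows enterprise these map to registry values under HKLM\Software\Policies\Google\Chrome, typically pushed via Group Policy.

```python
# Hedged sketch: write a managed Chrome policy file that disables JavaScript
# and plugins by default, re-enabling JavaScript only for trusted sites.
import json
import pathlib

policy = {
    "DefaultJavaScriptSetting": 2,       # 2 = block JavaScript by default
    "DefaultPluginsSetting": 2,          # 2 = block plugins by default
    "JavaScriptAllowedForUrls": [
        "https://intranet.example.com",  # hypothetical trusted enterprise site
    ],
}

# Drop-in location for managed policies on Linux/ChromeOS (illustrative);
# a Windows deployment would push the same keys through Group Policy instead.
target = pathlib.Path("/etc/opt/chrome/policies/managed/hardening.json")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(policy, indent=2))
```
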
Douros, V. G., Riihijärvi, J., Mähönen, P..  2017.  Network economics of SDN-based infrastructures: Can we unlock value through ICN multicast? 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). :1–5.

Software-defined networking (SDN) is enabling radically easier deployment of new routing infrastructures in enterprise and operator networks. However, it is not clear how best to exploit this flexibility when also considering the migration costs. In this paper, we use tools from network economics to study a recent proposal of using information-centric networking (ICN) principles on an SDN infrastructure for improving the delivery of Internet Protocol (IP) services. The key value proposition of this IP-over-ICN approach is to use the native and lightweight multicast service delivery enabled by ICN technology to reduce network load by removing redundant data. Our analysis shows that for services where IP multicast delivery is technically feasible, IP-over-ICN deployments are economically sensible only if few users access the given service simultaneously. However, for services where native IP multicast is not technically feasible, such as dynamically generated or personalized content, IP-over-ICN significantly outperforms IP.

Yu, L., Wang, Q., Barrineau, G., Oakley, J., Brooks, R. R., Wang, K. C..  2017.  TARN: A SDN-based traffic analysis resistant network architecture. 2017 12th International Conference on Malicious and Unwanted Software (MALWARE). :91–98.

Destination IP prefix-based routing protocols are core to Internet routing today. Internet autonomous systems (ASes) possess fixed IP prefixes, while packets carry the intended destination AS's prefix in their headers, in clear text. As a result, network communications can be easily identified using IP addresses and become targets of a wide variety of attacks, such as DNS/IP filtering, distributed denial-of-service (DDoS) attacks, and man-in-the-middle (MITM) attacks. In this work, we explore an alternative network architecture that fundamentally removes such vulnerabilities by disassociating IP prefixes from destination networks, and by allowing any end-to-end communication session to use dynamic, short-lived, pseudo-random IP addresses drawn from a range of IP prefixes rather than one. The concept is seemingly impossible to realize in today's Internet. We demonstrate how this is doable today with three different strategies using software-defined networking (SDN), and how it can be done at scale to transform the Internet addressing and routing paradigms with the novel concept of a distributed software-defined Internet exchange (SDX). The solution works with both IPv4 and IPv6, with the latter providing higher degrees of IP addressing freedom. Prototypes based on Open vSwitch (OVS) have been implemented for experimentation across the PEERING BGP testbed. The SDX solution not only provides a technically sustainable pathway towards large-scale traffic-analysis-resistant network (TARN) support, it also unveils a new business model for customer-driven, customizable, and trustable end-to-end network services.
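
The address-rotation idea can be sketched in a few lines: each session draws a short-lived, pseudo-random address from a pool of prefixes, and a mapping table records what the SDN switches would install as rewrite rules. The prefixes, lifetime, and mapping policy below are our illustrative assumptions; in the paper, the rewriting itself is done by Open vSwitch at an SDX.

```python
# Hedged sketch of TARN-style address rotation: short-lived, pseudo-random
# source addresses drawn from a *range* of prefixes instead of one fixed prefix.
import ipaddress
import secrets
import time

PREFIX_POOL = [ipaddress.ip_network(p) for p in
               ("203.0.113.0/24", "198.51.100.0/24")]   # documentation prefixes
LIFETIME = 30            # seconds a pseudo-random address stays valid

def ephemeral_address():
    net = secrets.choice(PREFIX_POOL)
    host = secrets.randbelow(net.num_addresses - 2) + 1  # skip network/broadcast
    return net[host]

# session table the SDN controller would install as rewrite rules on switches
sessions = {}

def open_session(real_src, real_dst):
    eph = ephemeral_address()
    sessions[eph] = (real_src, real_dst, time.time() + LIFETIME)
    return eph   # packets leave the AS carrying this address, not real_src

print(open_session("10.0.0.5", "93.184.216.34"))
```
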
2018-05-01
Erdem, Ö, Turan, M..  2017.  A Case Study for Automatic Detection of Steganographic Images in Network Traffic. 2017 10th International Conference on Electrical and Electronics Engineering (ELECO). :885–889.

The detection and prevention of data breaches in corporate networks is one of the most important security problems of today's world. The techniques and applications proposed as solutions are not successful when attackers attempt to steal data using steganography: the art of storing data in a file called a cover, such as a picture, sound, or video, where the concealed data cannot be directly recognized. Steganalysis is the process of revealing the presence of embedded messages in such files, and many statistical and signature-based steganalysis algorithms exist. In this work, the detection of steganographic images with steganalysis techniques is reviewed, and a system has been developed which automatically detects steganographic images in network traffic using open-source tools.
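
One classic statistical test such a detector might include is the Westfeld-Pfitzmann chi-square attack: LSB embedding tends to equalize the frequencies of each pair of values (2k, 2k+1). A sketch assuming Pillow follows; a real system combines several such tests per image.

```python
# Hedged sketch of the chi-square steganalysis test over an image's
# grayscale histogram; a chi value near 0 across many pairs suggests
# LSB embedding, since embedding equalizes each (2k, 2k+1) pair.
from collections import Counter

from PIL import Image

def chi_square_stat(path):
    pixels = list(Image.open(path).convert("L").getdata())
    hist = Counter(pixels)
    chi, dof = 0.0, 0
    for k in range(128):
        even, odd = hist[2 * k], hist[2 * k + 1]
        expected = (even + odd) / 2
        if expected > 5:                 # usual validity threshold per pair
            chi += (even - expected) ** 2 / expected
            dof += 1
    return chi, dof

chi, dof = chi_square_stat("suspect.png")
print(f"chi-square={chi:.1f} over {dof} pairs")
```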

Korczynski, M., Tajalizadehkhoob, S., Noroozian, A., Wullink, M., Hesselman, C., v Eeten, M..  2017.  Reputation Metrics Design to Improve Intermediary Incentives for Security of TLDs. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :579–594.

Over the years cybercriminals have misused the Domain Name System (DNS), a critical component of the Internet, to gain profit. Despite this persisting trend, little empirical information exists about the security of Top-Level Domains (TLDs) and the overall 'health' of the DNS ecosystem. In this paper, we present security metrics for this ecosystem and measure their operational values using three representative phishing and malware datasets, benchmarking entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse; in conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how different properties of TLD intermediaries relate to abuse counts, finding that next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
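
The final modelling step could look like the following count-regression sketch. The abstract does not name the model family, so a Poisson GLM with TLD size as exposure is our assumption, and the data below is synthetic, for illustration only.

```python
# Hedged sketch of regressing per-TLD abuse counts on intermediary properties
# with statsmodels; all variables and data here are synthetic/illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
size = rng.integers(10_000, 5_000_000, n)          # domains per TLD (exposure)
free_regs = rng.integers(0, 2, n)                  # offers free registrations?
dnssec_rate = rng.random(n)                        # DNSSEC deployment rate

X = sm.add_constant(np.column_stack([free_regs, dnssec_rate]))
true_rate = np.exp(-9 + 0.8 * free_regs - 1.2 * dnssec_rate)
abuse = rng.poisson(true_rate * size)              # phishing/malware domain counts

model = sm.GLM(abuse, X, family=sm.families.Poisson(), exposure=size).fit()
print(model.summary())   # expect positive free_regs, negative dnssec_rate
```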