Biblio
Software-defined networking (SDN) promises to dramatically simplify network management for operators, providing flexibility and fostering innovation through network programmability. With SDN, network management moves from codifying functionality in terms of low-level device configuration to building software that facilitates network management and debugging[1]. SDN also provides new techniques for solving long-standing networking problems, such as routing, by separating the complexity of state distribution from network specification. Despite all the hype surrounding SDN, exploiting its full potential is demanding. Security remains a major issue and a striking challenge that slows the growth of SDN. Moreover, the introduction of various new architectural components and entities in SDN poses new security issues and threats. SDN is considered a major target for digital threats and cyber-attacks[2], on which attacks can have more devastating effects than on conventional networks. The initial SDN design did not consider security; it must therefore be raised on the agenda. This article discusses the security solutions proposed to secure SDNs. We categorize these solutions by presenting a thematic taxonomy based on SDN architectural layers/interfaces[3], security measures and goals, and simulation frameworks. The article also points out the possible attacks[2] targeting different layers/interfaces of SDNs. For securing SDNs, the potential requirements and their key enablers are identified and presented, and the article sketches the design of secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that professionals and researchers may find appropriate to address in the future.
Controllers for software-defined networks (SDNs) are quickly maturing to offer network operators more intuitive programming frameworks and greater abstractions for network application development. Likewise, many security solutions now exist within SDN environments for detecting and blocking clients who violate network policies. However, many of these solutions stop at triggering the security measure and give little thought to amending it. As a consequence, once the violation is addressed, no clear path exists for reinstating the flagged client beyond having the network operator reset the controller or manually implement a state change via an external command. This burdens the network, its clients, and its administrators. Hence, we present a security policy transition framework for revoking security measures in an SDN environment once said measures are activated.
Sego is a hypervisor-based system that gives strong privacy and integrity guarantees to trusted applications, even when the guest operating system is compromised or hostile. Sego verifies operating system services, like the file system, instead of replacing them. By associating trusted metadata with user data across all system devices, Sego verifies system services more efficiently than previous systems, especially services that depend on data contents. We extensively evaluate Sego's performance on real workloads and implement a kernel fault injector to validate Sego's file system-agnostic crash consistency and recovery protocol.
The usage of Information and Communication Technologies (ICTs) pervades everyday life. While ICTs have contributed to improving the quality of our lives, new forms of (cyber)crime have also emerged in this setting. The diversity and amount of information forensic investigators need to cope with when tackling a cyber-crime case call for tools and techniques where knowledge is the main actor. Current approaches leave to the investigator the chore of integrating the diverse sources of evidence relevant for a case, thus hindering the automatic generation of reusable knowledge. This paper describes an architecture that lifts the classical phases of a digital forensic investigation to a knowledge-driven setting. We discuss how languages and technologies originating from the Semantic Web proposal can complement digital forensics tools so that knowledge becomes a first-class citizen. Our architecture enables complex forensic investigations to be performed in an integrated way and, as a by-product, builds a knowledge base that can be consulted to gain insights from previous cases. Our proposal was inspired by real-world scenarios emerging in the context of an Italian research project on cyber security.
Utility computing is gradually being realized, as exemplified by cloud computing. Outsourcing computing and storage to global-scale cloud providers benefits from high accessibility, flexibility, scalability, and cost-effectiveness. However, users are uneasy about outsourcing the storage of sensitive data due to security concerns. We address this problem by presenting SeMiNAS, an efficient middleware system that allows files to be securely outsourced to providers and shared among geo-distributed offices. SeMiNAS achieves end-to-end data integrity and confidentiality with a highly efficient authenticated-encryption scheme. SeMiNAS leverages advanced NFSv4 features, including compound procedures and data-integrity extensions, to minimize extra network round trips caused by security metadata. SeMiNAS also caches remote files locally to reduce accesses to providers over WANs. We designed, implemented, and evaluated SeMiNAS, which demonstrates a small performance penalty of less than 26% and an occasional performance boost of up to 19% for Filebench workloads.
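The abstract does not spell out SeMiNAS's authenticated-encryption scheme; below is a minimal sketch of the general technique it relies on, assuming AES-GCM via Python's cryptography package, where a single primitive provides confidentiality, an integrity tag, and authentication of unencrypted metadata passed as associated data. All names and the metadata format are illustrative assumptions.

```python
# Sketch: authenticated encryption (AES-GCM) for securely outsourced file
# data. One call yields confidentiality plus an integrity tag, and also
# authenticates unencrypted metadata passed as "associated data".
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def seal(plaintext: bytes, file_metadata: bytes):
    nonce = os.urandom(12)                 # must be unique per encryption
    ct = aead.encrypt(nonce, plaintext, file_metadata)
    return nonce, ct                       # ct includes the GCM tag

def open_(nonce: bytes, ct: bytes, file_metadata: bytes) -> bytes:
    # Raises InvalidTag if the ciphertext or the metadata was tampered with.
    return aead.decrypt(nonce, ct, file_metadata)

nonce, ct = seal(b"file block contents", b"inode=42;offset=0")
assert open_(nonce, ct, b"inode=42;offset=0") == b"file block contents"
```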
Encrypting Internet communications has been the subject of renewed focus in recent years. In order to add end-to-end encryption to legacy applications without losing the convenience of full-text search, ShadowCrypt and Mimesis Aegis use a new cryptographic technique called "efficiently deployable efficiently searchable encryption" (EDESE) that allows a standard full-text search system to perform searches on encrypted data. Compared to other recent techniques for searching on encrypted data, EDESE schemes leak a great deal of statistical information about the encrypted messages and the keywords they contain. Until now, the practical impact of this leakage has been difficult to quantify. In this paper, we show that the adversary's task of matching plaintext keywords to the opaque cryptographic identifiers used in EDESE can be reduced to the well-known combinatorial optimization problem of weighted graph matching (WGM). Using real email and chat data, we show how off-the-shelf WGM solvers can be used to accurately and efficiently recover hundreds of the most common plaintext keywords from a set of EDESE-encrypted messages. We also show how to recover tags from Bloom filters, so that the WGM solver can be applied to sets of encrypted messages that encode their search tags in a Bloom filter. Finally, we show that the attack can be mitigated by carefully configuring Bloom filter parameters.
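As a drastically simplified illustration of the leakage being exploited (the paper's actual attack solves weighted graph matching over keyword/tag co-occurrence matrices), the sketch below pairs opaque deterministic tags with plaintext keywords purely by frequency rank. The tiny corpora and the tag construction are made-up assumptions, not the paper's data or scheme.

```python
# Toy frequency-rank attack on EDESE-style deterministic search tags:
# tags reveal which documents contain the same keyword, so an attacker
# with a statistically similar auxiliary corpus can align frequency ranks.
from collections import Counter
from hashlib import sha256

def tags(doc, key=b"k"):  # hypothetical EDESE-style deterministic tags
    return {sha256(key + w.encode()).hexdigest()[:8] for w in doc.split()}

encrypted_corpus = [tags(d) for d in [
    "meeting budget friday", "budget report", "meeting notes",
    "budget meeting agenda", "friday report",
]]
aux_corpus = [  # attacker's auxiliary plaintext data
    "meeting budget monday", "budget summary", "meeting notes",
    "budget meeting agenda", "monday summary",
]

tag_freq = Counter(t for doc in encrypted_corpus for t in doc)
word_freq = Counter(w for doc in aux_corpus for w in set(doc.split()))

# Pair the i-th most frequent tag with the i-th most frequent keyword.
guess = dict(zip([t for t, _ in tag_freq.most_common()],
                 [w for w, _ in word_freq.most_common()]))
for tag, word in list(guess.items())[:3]:
    print(tag, "->", word)
```

The real attack is far more robust because co-occurrence structure, not just marginal frequency, is matched; this sketch only shows why deterministic tags leak enough to attempt such matching at all.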
Biometric authentication schemes are frequently used to establish the identity of a user. Often, a trusted hardware device is used to decide whether a provided biometric feature is sufficiently close to the features stored by the legitimate user during enrollment. In this paper, we address the question of whether the stored features can be extracted with side-channel attacks. We consider several models for the types of leakage that are specifically relevant to fingerprint verification, and show results for attacks against the Bozorth3 matcher and a custom matching algorithm. This work shows an interesting path for future research on the susceptibility of biometric algorithms to side-channel attacks.
Providing a global understanding of privacy is crucial, because everything is connected. Companies nowadays provide their customers with more services that also give the companies more access to customer data and daily activity; electricity companies, for example, market new smart meters as a service with the great benefit of reducing electricity usage by monitoring readings in real time. Although users might benefit from this extra service, it compromises their privacy by granting constant access to the readings. Since smart meters provide real-time electricity readings, one can identify which devices are consuming energy at a specific moment and how much it costs. This kind of information can be exploited by numerous types of people, and its unauthorized use is an invasion of privacy that may lead to much more severe consequences. This paper proposes an algorithmic approach for the comparison and analysis of smart meter data readings, considering the time and temperature factors at each second, to identify the usage patterns of each house by identifying appliance activity at each second in O(log(m)) time.
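A plausible reading of the O(log(m)) claim is a binary search per reading over m sorted appliance power signatures. The sketch below illustrates that idea under that assumption; the signature table and tolerance are hypothetical, and the paper's time and temperature factors are omitted.

```python
# Sketch: match a change in the smart-meter reading to an appliance by
# binary search over m sorted power signatures, O(log m) per reading.
import bisect

# (power delta in watts, appliance), kept sorted by power delta
SIGNATURES = sorted([(60, "lamp"), (150, "fridge compressor"),
                     (800, "microwave"), (2000, "kettle"), (3500, "oven")])
POWERS = [p for p, _ in SIGNATURES]

def identify(delta_watts, tolerance=50):
    """Binary-search the closest signature to the observed power delta."""
    i = bisect.bisect_left(POWERS, delta_watts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(POWERS)]
    best = min(candidates, key=lambda j: abs(POWERS[j] - delta_watts))
    if abs(POWERS[best] - delta_watts) <= tolerance:
        return SIGNATURES[best][1]
    return None  # no signature close enough

print(identify(1985))   # -> 'kettle'
print(identify(400))    # -> None (no signature within tolerance)
```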
The exploitation of the opportunistic infrastructure via Device-to-Device (D2D) communication is a critical component towards the adoption of new paradigms such as edge and fog computing. While a lot of work has demonstrated the great potential of D2D communication, it is still unclear whether the benefits of the D2D approach can really be leveraged in practice. In this paper, we develop a software sensor, namely Detector, which senses the infrastructure in proximity of a mobile user. We analyze and evaluate D2D in the wild, i.e., not in simulations. We found that in a realistic environment, a mobile device is always co-located in proximity to at least one other mobile device throughout the day. This suggests that a device can schedule task processing in coordination with other, potentially more powerful devices, instead of handling the processing of the tasks by itself.
With the rising interest in expedient, safe, and highly efficient transportation, vehicular ad hoc networks (VANETs) have become a critical technology in smart transportation systems. Because of the high mobility of nodes, VANETs are vulnerable to security attacks. In this paper, we propose a novel framework of software-defined VANETs with trust management. Specifically, we separate the forwarding plane in VANETs from the control plane, which is responsible for control functionality such as routing protocols and trust management. Using the trust-based ad hoc on-demand distance vector routing (TAODV) protocol as an example, we present a routing protocol named software-defined trust-based ad hoc on-demand distance vector routing (SD-TAODV). Simulation results are presented to show the effectiveness of the proposed software-defined VANETs with trust management.
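The abstract does not detail SD-TAODV's mechanics; the sketch below only illustrates the general idea of trust-managed route selection at an SDN control plane. The topology, trust scores, and the multiplicative path-trust rule are assumptions for illustration, not taken from the paper.

```python
# Sketch: a controller scores each candidate route by the product of its
# nodes' trust values and installs the most trusted one.
from functools import reduce

TRUST = {"A": 0.9, "B": 0.95, "C": 0.4, "D": 0.85, "E": 0.9}

def path_trust(path):
    """Multiplicative trust: one low-trust relay degrades the whole path."""
    return reduce(lambda acc, n: acc * TRUST[n], path, 1.0)

candidate_routes = [      # e.g. produced by AODV-style route discovery
    ["A", "C", "E"],      # short, but goes through low-trust node C
    ["A", "B", "D", "E"], # longer, but trustworthy
]
best = max(candidate_routes, key=path_trust)
print("install route:", best, "trust=%.3f" % path_trust(best))
```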
DDoS-for-hire services, also known as booters, have commoditized DDoS attacks and enabled abusive subscribers of these services to cheaply extort, harass, and intimidate businesses and people by taking them offline. However, due to the underground nature of these booters, little is known about their underlying technical and business structure. In this paper, we empirically measure many facets of their technical and payment infrastructure. We also perform an analysis of leaked and scraped data from three major booters (Asylum Stresser, Lizard Stresser, and VDO), which provides us with an in-depth view of their customers and victims. Finally, we conduct a large-scale payment intervention in collaboration with PayPal and evaluate its effectiveness as a deterrent to their operations. Based on our analysis, we show that these booters are responsible for hundreds of thousands of DDoS attacks and identify potentially promising methods to undermine these services by increasing their costs of operation.
In this paper, we apply verifiable computing techniques to biometric matching. The purpose of verifiable computation is to return the result of a computation along with a proof that the calculations were performed correctly. We adapt the sumcheck protocol and present a system that performs verifiable biometric matching in the setting of fast border control. This is a work in progress, and we focus on verifying an inner product. We then give some experimental results of its implementation. Verifiable computation here helps strengthen the authentication phase by bringing into the process a proof that the biometric verification has been performed correctly.
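The sumcheck protocol for an inner product can be made concrete in a few dozen lines. The toy sketch below verifies the claim <a, b> = sum over x in {0,1}^n of a~(x)·b~(x), where a~ and b~ are multilinear extensions of the vectors; the small field, the interactive (rather than Fiat-Shamir) flow, and all parameters are illustrative assumptions, not the paper's system.

```python
# Minimal interactive sumcheck for an inner product <a, b> over F_p.
import itertools
import random

P = 2**31 - 1  # a small Mersenne prime, for illustration only

def mle_eval(vec, point):
    """Evaluate the multilinear extension of vec (length 2**n) at point in F_p^n."""
    total, n = 0, len(point)
    for i, v in enumerate(vec):
        w = 1
        for j in range(n):
            bit = (i >> (n - 1 - j)) & 1
            w = w * (point[j] if bit else (1 - point[j])) % P
        total = (total + v * w) % P
    return total

def f(point, a, b):
    return mle_eval(a, point) * mle_eval(b, point) % P

def prover_round(a, b, prefix, n):
    """Prover: evaluations of the degree-2 round polynomial g(t) at t = 0, 1, 2."""
    k = n - len(prefix) - 1  # boolean variables left after the current one
    evals = []
    for t in (0, 1, 2):
        s = 0
        for tail in itertools.product((0, 1), repeat=k):
            s = (s + f(prefix + [t] + list(tail), a, b)) % P
        evals.append(s)
    return evals

def interp3(evals, r):
    """Evaluate the degree-2 polynomial given by its values at 0, 1, 2 at r."""
    g0, g1, g2 = evals
    inv2 = pow(2, P - 2, P)
    return (g0 * (r - 1) * (r - 2) % P * inv2
            - g1 * r * (r - 2)
            + g2 * r * (r - 1) % P * inv2) % P

def sumcheck(a, b, n):
    claim = sum(x * y for x, y in zip(a, b)) % P  # prover's claimed <a, b>
    prefix, current = [], claim
    for _ in range(n):
        g = prover_round(a, b, prefix, n)
        assert (g[0] + g[1]) % P == current, "verifier rejects"
        r = random.randrange(P)       # verifier's random challenge
        current = interp3(g, r)
        prefix.append(r)
    # Final check: a single evaluation of f by the verifier itself.
    assert f(prefix, a, b) == current, "verifier rejects"
    return claim

n = 3                                 # vectors of length 2**3 = 8
a = [random.randrange(P) for _ in range(2**n)]
b = [random.randrange(P) for _ in range(2**n)]
print("verified inner product:", sumcheck(a, b, n))
```

In each round the verifier checks that the round polynomial is consistent with the running claim and replaces one boolean variable with a random field element; soundness follows because a cheating prover must pass n such checks against random challenges.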
Security is concerned with protecting assets, and its aspects, defense, detection, and deterrence, can be applied to any situation. Network security plays the important role of protecting the information, hardware, and software on a computer network. Denial-of-service (DoS) attacks have a great impact on the Internet: they attempt to disrupt legitimate users' access to services, and by exploiting computers' vulnerabilities, attackers can easily consume a victim's resources. Many specialized techniques have been developed to protect against DoS attacks, and some organizations deploy several defense tools to tackle these security problems. This paper surveys various types of attacks, and the solutions associated with them, at each layer of the OSI model. These attacks and solutions have different impacts in different environments, and the rapid growth of new technologies may lead to attacks with still worse impacts in the future.
A Mobile Ad Hoc Network (MANET) is a multi-hop, temporary, and autonomic network comprised of a set of mobile nodes. MANETs have no central infrastructure, a dynamically changing topology, multi-hop routing, mobile nodes, limited resources, and so on, all of which expose them to more threats. Trust evaluation is used to support nodes in cooperating in a secure and trustworthy way by evaluating the trust of participating nodes in MANETs. However, many trust evaluation models proposed for MANETs still have problems and shortcomings. In this paper, we review the existing research, then analyze and compare the proposed trust evaluation models by presenting and applying uniform criteria, in order to point out a number of open issues and challenges and suggest future research trends.
Formal specification is a vital ingredient to scalable verification of software systems. In the case of efficient implementations of concurrent objects like atomic registers, queues, and locks, symbolic formal representations of their abstract data types (ADTs) enable efficient modular reasoning, decoupling clients from implementations. Writing adequate formal specifications, however, is a complex task requiring rare expertise. In practice, programmers write reference implementations as informal specifications. In this work we demonstrate that effective symbolic ADT representations can be automatically generated from the executions of reference implementations. Our approach exploits two key features of naturally-occurring ADTs: violations can be decomposed into a small set of representative patterns, and these patterns manifest in executions with few operations. By identifying certain algebraic properties of naturally-occurring ADTs, and exhaustively sampling executions up to a small number of operations, we generate concise symbolic ADT representations which are complete in practice, enabling the application of efficient symbolic verification algorithms without the burden of manual specification. Furthermore, the concise ADT violation patterns we generate are human-readable, and can serve as useful, formal documentation.
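As a toy version of the sampling step (the symbolic representation inference is the paper's contribution and is not reproduced here), the sketch below exhaustively enumerates the sequential executions of a reference FIFO queue up to a small number of operations and records the admitted histories. The operation set, value domain, and history format are illustrative assumptions.

```python
# Sketch: exhaustively sample a reference implementation's executions.
# Enumerate all op sequences up to length k on a FIFO queue and collect
# the admitted (operation, argument, return-value) histories, the raw
# material from which an ADT specification could be inferred.
from itertools import product

def run(seq):
    """Execute one sequence of ('enq', v) / ('deq',) ops on a reference queue."""
    q, history = [], []
    for op in seq:
        if op[0] == 'enq':
            q.append(op[1])
            history.append(('enq', op[1], None))
        else:
            ret = q.pop(0) if q else 'EMPTY'
            history.append(('deq', None, ret))
    return tuple(history)

def sample_behaviors(k, values=(1, 2)):
    ops = [('enq', v) for v in values] + [('deq',)]
    behaviors = set()
    for length in range(1, k + 1):
        for seq in product(ops, repeat=length):
            behaviors.add(run(seq))
    return behaviors

behaviors = sample_behaviors(3)
print(len(behaviors), "admitted histories; a few examples:")
for h in list(behaviors)[:5]:
    print(h)
```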
Targeted attacks on IT systems are a rising threat against the confidentiality of sensitive data and the availability of systems and infrastructures. Planning for the eventuality of a data breach or sabotage attack has become an increasingly difficult task with the emergence of advanced persistent threats (APTs), a class of highly sophisticated cyber-attacks that are nigh impossible to detect using conventional signature-based systems. Understanding, interpreting, and correlating the particulars of such advanced targeted attacks is a major research challenge that needs to be tackled before behavior-based approaches can evolve from their current state to truly semantics-aware solutions. Ontologies offer a versatile foundation well suited for depicting the complex connections between such behavioral data and the diverse technical and organizational properties of an IT system. In order to facilitate the development of novel behavior-based detection systems, we present TAON, an OWL-based ontology offering a holistic view on actors, assets, and threat details, which are mapped to individual abstracted events and anomalies that can be detected by today's monitoring data providers. TAON offers a straightforward means to plan an organization's defense against APTs and helps to understand how, why, and by whom certain resources are targeted. Populated by concrete data, the proposed ontology becomes a smart correlation framework able to combine several data sources into a semantic assessment of any targeted attack.
In order to retrieve the secret key in a side-channel attack, the attacker computes distinguisher values using all the available data. A profiling stage is very useful for providing some a priori information about the leakage model. However, profiling is essentially empirical and may not be exhaustive. Therefore, during the attack, the attacker may come upon previously unseen data, which can be troublesome. A lazy workaround is to ignore all such novel observations altogether. In this paper, we show that this is not optimal and can be avoided. Our proposed techniques ultimately improve the performance of classical information-theoretic distinguishers in terms of success rate.
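The paper's own estimators are not reproduced here; a common baseline for handling previously unseen observations in a profiled distinguisher is additive (Laplace) smoothing of the estimated leakage distributions, sketched below under an assumed Hamming-weight leakage model with illustrative parameters.

```python
# Sketch: profiled maximum-likelihood key ranking with Laplace smoothing,
# so attack-phase observations never seen during profiling get a small
# nonzero probability instead of zeroing out a key hypothesis.
from collections import Counter
import math, random

def hw(x): return bin(x).count("1")
SBOX = list(range(256)); random.seed(0); random.shuffle(SBOX)

def leak(pt, key, noise=1):
    return hw(SBOX[pt ^ key]) + random.randint(-noise, noise)

# Profiling: estimate P(observation | HW class) on a device with known keys.
profile = {c: Counter() for c in range(9)}
for _ in range(20000):
    pt, key = random.randrange(256), random.randrange(256)
    profile[hw(SBOX[pt ^ key])][leak(pt, key)] += 1
TOTALS = {c: sum(cnt.values()) for c, cnt in profile.items()}

def smoothed_logp(c, obs, alpha=1.0, support=12):
    """Laplace smoothing: unseen obs score alpha / (N + alpha*support)."""
    return math.log((profile[c][obs] + alpha) / (TOTALS[c] + alpha * support))

# Attack: rank key hypotheses by smoothed log-likelihood over traces.
secret = 0x3C
traces = [(pt, leak(pt, secret)) for pt in random.choices(range(256), k=200)]
scores = {k: sum(smoothed_logp(hw(SBOX[pt ^ k]), obs) for pt, obs in traces)
          for k in range(256)}
# With enough traces this toy setting typically recovers `secret`.
print("best key guess:", hex(max(scores, key=scores.get)))
```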
Texting while driving has emerged as a significant threat to citizen safety. In this study, we utilize general deterrence theory (GDT), protection motivation theory, and personality traits to evaluate texting-while-driving (TWD) compliance intentions among teenage drivers. This paper presents the results of our pilot study, in which we administered an online survey to 105 teenage and young adult drivers. The potential implications for research, practice, and policy are discussed.
Temporal graphs are a class of graphs whose nodes and edges, together with the associated properties, continuously change over time. Recently, systems have been developed to support snapshot queries over temporal graphs. However, these systems barely support aggregate time-range queries, and they cannot guarantee ACID transactions, an important feature for data management systems whenever concurrent processing is involved. To solve these issues, we design and develop TGraph, a temporal graph data management system that guarantees ACID transactions and supports fast temporal graph queries.
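To make the query class concrete, the sketch below models an aggregate time-range query over interval-stamped property values. This is a toy model of the semantics only; it is not TGraph's actual storage layout or API.

```python
# Sketch: each property value carries a [start, end) validity interval;
# an aggregate time-range query folds all values overlapping the range.
from dataclasses import dataclass

@dataclass
class TemporalValue:
    start: int    # inclusive timestamp
    end: int      # exclusive timestamp
    value: float

def range_aggregate(history, t0, t1, agg=max):
    """Aggregate every value whose validity interval overlaps [t0, t1)."""
    hits = [tv.value for tv in history if tv.start < t1 and tv.end > t0]
    return agg(hits) if hits else None

# Travel time on one road edge, changing over time.
edge_travel_time = [
    TemporalValue(0, 10, 5.0),
    TemporalValue(10, 20, 9.5),   # rush hour
    TemporalValue(20, 30, 6.0),
]
print(range_aggregate(edge_travel_time, 8, 25))       # -> 9.5 (max)
print(range_aggregate(edge_travel_time, 8, 25, min))  # -> 5.0
```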
Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance is the first concept to have emerged, and it has been recognized as an important issue in government outreach to citizens since the early 1990s. The most important recent development in e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, build value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create value for the public and enterprises. Its success hinges on its accessibility, data quality, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, analyzing the open data of nine major government agencies. Our results indicate that the Financial Supervisory Commission performs better than the other agencies, especially in terms of accessibility: it mostly provides dataset formats of 3 stars or above, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations, and other administrative data, which are not immediately applicable. Overall, government agencies should proactively and continuously increase the amount and quality of Open Government Data, and strengthen the discussion and linkage functions of their platforms as well as the quality of their datasets. Aside from consolidating collaborations and interactions with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As the level of acceptance of open data among personnel improves, the quantity and quality of Open Government Data will improve as well.
Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMI and TPI, which focus on testing. Quality management, a main concern within the software industry, is broader than the concept of testing: the V-Model sets a broader view with the concepts of verification and validation, quality assurance (QA) is another broader term that includes the quality of processes, and configuration audits add more scope. In parallel, there are some less visible dimensions of quality not often addressed in traditional models, such as the business alignment of QA efforts. This paper compares the commonly accepted models related to software quality management and proposes a model that fills an empty space in this area. The paper provides some analysis of the concepts of maturity and capability levels and proposes some adaptations for quality management assessment.
Since its invention in the late 1960s, the Internet has become the largest and best-known communication network, serving as social, industrial, and public infrastructure. In historical retrospect, the Internet architecture has evolved repeatedly by working through various technical challenges; for instance, in the early 1990s the Internet faced a scalability crisis, which it soon overcame, successfully evolving by applying emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability issues as the key technical challenge, forecasting that the Internet-of-Things era has come. First, we describe the identifier/locator separation scheme, which can achieve dramatic architectural evolution, in historical perspective. Additionally, we review various kinds of identifier/locator separation schemes, because such schemes have recently become a major design pillar for the future of the Internet architecture, in both clean-slate future Internet architectures and evolving Internet architectures. Lastly, we present the results of an analysis, organized in an analysis table, for the future Internet of Everything, in which the number of Internet-connected devices is expected to grow to more than 20 billion by 2020.
There are currently no requirements (technical or otherwise) that routing paths must be contained within national boundaries. Indeed, some paths experience international detours, i.e., they originate in one country, cross international boundaries, and return to the same country. In most cases these are sensible traffic engineering or peering decisions at ISPs that serve multiple countries; in some cases such detours may be suspicious. Characterizing international detours is useful to a number of players: (a) network engineers trying to diagnose persistent problems, (b) policy makers aiming to adhere to certain national communication policies, (c) entrepreneurs looking for opportunities to deploy new networks, and (d) privacy-conscious states trying to minimize the amount of internal communication traversing different jurisdictions. In this paper we characterize international detours in the Internet during the month of January 2016. To detect detours we sample BGP RIBs every 8 hours from 461 RouteViews and RIPE RIS peers spanning 30 countries. We geolocate each AS, and thus each BGP prefix it announces, by mapping the AS's presence at IXPs and geolocating its infrastructure IPs. Finally, we analyze each global BGP RIB entry looking for detours. Our analysis shows that more than 5K unique BGP prefixes experienced a detour, and 132 prefixes experienced more than 50% of the detours. We observe about 544K detours in total. Detours either last for a few days or persist for the entire month: more than 90% were transient detours that lasted 72 hours or less. We also show that different countries experience detours with different characteristics.
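The core detour test is easy to state: a path detours if it starts and ends in the same country but traverses at least one AS located elsewhere. The sketch below encodes that check, with a hypothetical AS-to-country table standing in for the paper's geolocation pipeline.

```python
# Toy detour check in the spirit of the paper: does an AS path leave its
# origin country and come back? AS-to-country data is a made-up stand-in.
AS_COUNTRY = {3356: "US", 174: "US", 6453: "IN", 1299: "SE", 7018: "US"}

def is_detour(as_path):
    countries = [AS_COUNTRY[a] for a in as_path if a in AS_COUNTRY]
    if len(countries) < 3:
        return False
    start, end = countries[0], countries[-1]
    return start == end and any(c != start for c in countries[1:-1])

print(is_detour([7018, 6453, 3356]))   # US -> IN -> US : True
print(is_detour([7018, 174, 3356]))    # stays in US    : False
```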
Opportunistic Situation Identification (OSI) is a new paradigm for situation-aware systems, in which the contexts for situation identification are sensed through whatever sensors happen to be available rather than pre-deployed, application-specific ones. OSI extends the scale of application usage and reduces system costs. However, designing and implementing the OSI module of a situation-aware system encounters several challenges, including the uncertainty of context availability, unreliable network connectivity, and privacy threats. This paper proposes a novel middleware framework to tackle these challenges; its intuition is to perform situation reasoning locally on a smartphone, without relying on the cloud, thus reducing the dependency on the network and better preserving privacy. To realize this intuition, we propose a hybrid learning approach that maximizes reasoning accuracy using the phone's limited storage space, combining two state-of-the-art techniques. Specifically, we provide a genetic-algorithm-based optimization approach to determine which pre-computed models are selected for storage under the storage constraints. Validation of the approach on an open dataset indicates that it achieves higher accuracy with comparatively small storage cost, and that the proposed utility function for model selection performs better than three baseline utility functions.
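As an illustration of genetic-algorithm-based model selection (the paper's utility function and constraints are not reproduced here), the sketch below evolves a bit-vector choosing which pre-computed models to store, rejecting selections that exceed an assumed storage budget. All model sizes, accuracy gains, and GA parameters are made-up assumptions.

```python
# Sketch: GA for selecting pre-computed models under a storage budget.
# Each chromosome bit says whether one model is stored on the phone.
import random
random.seed(42)

SIZES  = [12, 7, 20, 4, 15, 9, 11, 5]                       # MB per model
GAINS  = [0.30, 0.18, 0.45, 0.10, 0.36, 0.22, 0.28, 0.12]   # accuracy gain
BUDGET = 40                                                 # MB available

def fitness(bits):
    size = sum(s for s, b in zip(SIZES, bits) if b)
    gain = sum(g for g, b in zip(GAINS, bits) if b)
    return gain if size <= BUDGET else 0.0   # infeasible picks score zero

def evolve(pop_size=30, gens=100, pmut=0.1):
    pop = [[random.randint(0, 1) for _ in SIZES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(SIZES))
            child = p1[:cut] + p2[cut:]                          # crossover
            child = [b ^ (random.random() < pmut) for b in child]  # mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = evolve()
print("selected models:", [i for i, b in enumerate(best) if b], "gain:", score)
```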
Defect-prediction techniques can enhance the quality assurance activities for software systems; for instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used to predict defects in features or combinations of features, allowing developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective: our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on the results, we discuss directions for future work.
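A minimal sketch of the classification setup, assuming scikit-learn's GaussianNB over per-feature process metrics; the metric names and the tiny inline dataset are illustrative assumptions, not the study's data.

```python
# Sketch: Naive Bayes over process metrics, predicting whether a
# product-line feature is defective (1) or clean (0).
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
import numpy as np

# rows: one product-line feature each; columns: assumed process metrics
# [commits touching the feature, distinct authors, lines changed]
X = np.array([
    [ 3, 1,  40], [25, 6, 900], [ 8, 2, 120], [40, 9, 1500],
    [ 5, 1,  60], [18, 4, 700], [ 2, 1,  15], [30, 7, 1100],
    [ 6, 2,  90], [22, 5, 800],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])  # 1 = defective, 0 = clean

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print("mean accuracy: %.2f" % scores.mean())
```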