Biblio
We report on our research on proving the security of multi-party cryptographic protocols using the EASYCRYPT proof assistant. We work in the computational model using the sequence-of-games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, instead of being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games that are expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries: ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EASYCRYPT of a three-party private count retrieval protocol. We emphasize the lessons learned from completing this proof.
Bitcoin, a major virtual currency, has attracted users' attention in recent years through its novel operating model. Built on blockchain technology, Bitcoin possesses strong security features that anonymize users' identities to protect their private information. However, some criminals exploit Bitcoin for illegal activities, which poses a serious security threat to society. It is therefore necessary to understand current Bitcoin usage trends and to work toward de-anonymization. In this paper, we propose and implement a system that analyzes Bitcoin from two aspects: blockchain data and network traffic data. We parse the blockchain data to analyze Bitcoin from the perspective of Bitcoin addresses, and simulate the Bitcoin P2P protocol to evaluate it from the perspective of IP addresses. Finally, using our system, we analyze current trends and trace transactions by computing statistics on Bitcoin transactions and addresses, tracing transaction flows, and de-anonymizing some Bitcoin addresses to IP addresses.
The Internet of Things (IoT) is an emerging trend that is changing the way devices connect and communicate. The integration of cloud computing with IoT, i.e., the Cloud of Things (CoT), provides scalability, virtualized control, and access to the services provided by IoT. Security issues are a major obstacle to the widespread deployment and application of CoT. Among these issues, authentication and identification of users is crucial. In this paper, a survey of various authentication schemes is carried out. The aim of this paper is to study in detail a multifactor authentication system that uses secret splitting. The system uses exclusive-or operations, encryption algorithms, and the Diffie-Hellman key exchange algorithm to share keys over the network. Security analysis shows the resistance of the system against different types of attacks.
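The abstract above mentions secret splitting combined with exclusive-or operations. As a minimal illustration only (not the paper's actual scheme, and omitting the encryption and Diffie-Hellman steps), the sketch below shows how a secret can be split into XOR shares that reveal nothing individually but reconstruct the secret when combined.

```python
import os

def xor_split(secret: bytes, n_shares: int = 2) -> list[bytes]:
    """Split `secret` into n_shares XOR shares; all shares are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n_shares - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def xor_combine(shares: list[bytes]) -> bytes:
    """Recombine XOR shares to recover the original secret."""
    secret = bytes(len(shares[0]))
    for share in shares:
        secret = bytes(a ^ b for a, b in zip(secret, share))
    return secret

if __name__ == "__main__":
    key = b"session-key-123"          # hypothetical secret to be shared
    parts = xor_split(key, 3)
    assert xor_combine(parts) == key
    print("recovered:", xor_combine(parts))
```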
Internet use grows day by day, so securing networks and data is a major issue. It is therefore very important to maintain security to ensure safe and trusted communication of information between different organizations. For this reason, an intrusion detection system (IDS) is a very useful component of computer and network security. IDSs are used by many organizations and industries to detect weaknesses in their security, document previous attacks and threats, and prevent them from violating security policies. Because of these advantages, such systems are important for system security. In this paper, we present a multilevel, IDS-based solution to different types of attacks. We identify different attacks and provide solutions for attack types such as DDoS, SQL injection, and brute-force attacks. We use a client-server architecture and maintain a profile of each user; based on this profile, the system classifies a user as normal or as an attacker, and when it detects an attack it blocks it directly.
In cloud storage systems, users can upload their data along with associated tags (authentication information) to cloud storage servers. To ensure the availability and integrity of the outsourced data, provable data possession (PDP) schemes convince verifiers (users or third parties) that the outsourced data stored in the cloud storage server is correct and unchanged. Recently, several PDP schemes with designated verifier (DV-PDP) were proposed to provide the flexibility of an arbitrary designated verifier. A designated verifier (private verifier) is trustable and designated by a user to check the integrity of the outsourced data. However, these DV-PDP schemes are either inefficient or insecure under some circumstances. In this paper, we propose the first non-repudiable PDP scheme with designated verifier (DV-NRPDP) to address the non-repudiation issue and resolve possible disputes between users and cloud storage servers. We define the system model, framework, and adversary model of DV-NRPDP schemes. Afterward, a concrete DV-NRPDP scheme is presented. Based on the hardness of computing discrete logarithms, we formally prove that the proposed DV-NRPDP scheme is secure against several forgery attacks in the random oracle model. Comparisons with previously proposed schemes are given to demonstrate the advantages of our scheme.
The attack graph technique is a common tool for the evaluation of network security. However, attack graphs are generally too large and complex to be understood and interpreted by security administrators. This paper proposes an analysis framework for security attack graphs for a given IT infrastructure system. First, in order to facilitate the discovery of interconnectivities among vulnerabilities in a network, multi-host multi-stage vulnerability analysis (MulVAL) is employed to generate an attack graph for a given network topology. Then a novel algorithm is applied to refine the attack graph and generate a simplified graph called a transition graph. Next, a Markov model is used to project the future security posture of the system. Finally, the framework is evaluated by applying it to a typical IT network scenario with specific services, network configurations, and vulnerabilities.
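To give a flavour of the projection step, the hedged sketch below shows how a Markov transition matrix over attack states can be iterated to estimate the probability of reaching a compromised state after k steps. The states and probabilities are illustrative assumptions, not the transition graph or parameters from the paper.

```python
# Hypothetical 3-state transition graph: 0 = secure, 1 = partially exploited, 2 = compromised.
# The probabilities below are illustrative only.
P = [
    [0.90, 0.08, 0.02],
    [0.00, 0.70, 0.30],
    [0.00, 0.00, 1.00],   # compromised is treated as an absorbing state
]

def step(dist, P):
    """One Markov step: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def project(dist, P, k):
    """Project the state distribution k steps into the future."""
    for _ in range(k):
        dist = step(dist, P)
    return dist

if __name__ == "__main__":
    start = [1.0, 0.0, 0.0]          # system starts in the secure state
    for k in (1, 5, 10):
        d = project(start, P, k)
        print(f"after {k} steps, P(compromised) = {d[2]:.3f}")
```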
GENI (Global Environment for Network Innovations) is a National Science Foundation (NSF) funded program which provides a virtual laboratory for networking and distributed systems research and education. It is well suited for exploring networks at scale, thereby promoting innovations in network science, security, services, and applications. GENI allows researchers to obtain compute resources from locations around the United States, connect compute resources using the 100G Internet2 L2 service, install custom software or even custom operating systems on these compute resources, control how network switches in their experiment handle traffic flows, and run their own L3 and above protocols. The GENI architecture incorporates cloud federation. With federation, cloud resources can be federated and/or a community of clouds can be formed. The heart of federation is user identity and the ability to "advertise" cloud resources into the community, including compute, storage, and networking. GENI administrators can carve out what resources are available to the community, and hence a portion of GENI resources is reserved for internal consumption. The GENI architecture also provides "stitching" of the compute and storage resources researchers request. This provides an L2 network domain over Internet2's 100G network, and researchers can run their own Software Defined Networking (SDN) controllers on the provisioned L2 network domain for complete control of network traffic. This capability is useful for large science data transfers (bypassing security devices for high throughput). The Renaissance Computing Institute (RENCI), a research institute in the state of North Carolina, has developed ORCA (Open Resource Control Architecture), a GENI control framework. ORCA is a distributed resource orchestration system that serves science experiments. ORCA provides compute resources as virtual machines as well as bare-metal servers. The ORCA-based GENI rack was designed to serve both High Throughput Computing (HTC) and High Performance Computing (HPC) types of computation. Although GENI is primarily used in universities and research entities today, the GENI architecture can be leveraged in commercial, aerospace, and government settings. This paper goes over the architecture of GENI and discusses its use for scientific computing experiments.
Web applications have become the leading solution for systems that need global, distributed, cost-effective access, as well as for the diversity of content that can run on this technology. At the same time, web application security has always been a major issue, given that 60% of Internet attacks target web application platforms. One of the biggest threats to this technology is the Cross-Site Scripting (XSS) attack, which occurs frequently and consistently appears in the Top 10 list of the Open Web Application Security Project (OWASP). The vulnerabilities exploited by this attack arise from the absence of checking, testing, and attention to secure coding practices. There are several alternatives for preventing attacks associated with this threat. A network intrusion detection system can be used as one solution to mitigate XSS attacks. This paper investigates XSS attack recognition and detection using regular expression pattern matching and a preprocessing method. Experiments are conducted on a testbed with the aim of revealing the behaviour of the attack.
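As a rough illustration of regex-based XSS detection with a preprocessing step (the signatures, normalisation, and thresholds below are assumptions, not the paper's rule set), a detector might URL-decode and lowercase the payload before matching it against a small pattern list:

```python
import re
from urllib.parse import unquote

# Illustrative signatures only; production rule sets are far larger and tuned against evasion.
XSS_PATTERNS = [
    re.compile(r"<\s*script\b", re.IGNORECASE),
    re.compile(r"javascript\s*:", re.IGNORECASE),
    re.compile(r"on(error|load|click|mouseover)\s*=", re.IGNORECASE),
    re.compile(r"<\s*iframe\b", re.IGNORECASE),
]

def preprocess(payload: str) -> str:
    """URL-decode (twice, to catch double encoding) and lowercase the payload."""
    return unquote(unquote(payload)).lower()

def looks_like_xss(payload: str) -> bool:
    """Return True if any signature matches the normalised payload."""
    text = preprocess(payload)
    return any(p.search(text) for p in XSS_PATTERNS)

if __name__ == "__main__":
    print(looks_like_xss("q=%3Cscript%3Ealert(1)%3C/script%3E"))  # True
    print(looks_like_xss("q=hello+world"))                        # False
```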
Authentication is one of the key aspects of securing applications and systems alike. While in most existing systems this is achieved using usernames and passwords, it has been repeatedly shown that this authentication method is not secure. Studies have shown that these systems have vulnerabilities which lead to cases of impersonation and identity theft, so there is a need to improve such systems to protect sensitive data. In this research, we explore combining the user's location with traditional usernames and passwords as a multi-factor authentication system to make authentication more secure. The idea involves comparing a user's mobile device location with that of the browser and comparing the device's Bluetooth key with the key used during registration. We believe that by leveraging existing technologies such as Bluetooth and GPS we can reduce implementation costs whilst improving security.
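A hedged sketch of the comparison step described above (function names, the distance threshold, and the key check are assumptions, not the authors' implementation): the server compares the browser-reported and device-reported coordinates and checks the presented Bluetooth key against the one stored at registration.

```python
import hmac
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def location_factor_ok(browser_loc, device_loc, max_km=1.0):
    """Accept only if the browser and mobile-device locations are within max_km of each other."""
    return haversine_km(*browser_loc, *device_loc) <= max_km

def bluetooth_factor_ok(presented_key: bytes, registered_key: bytes) -> bool:
    """Constant-time comparison of the presented Bluetooth key with the registered one."""
    return hmac.compare_digest(presented_key, registered_key)

if __name__ == "__main__":
    browser = (52.5200, 13.4050)   # illustrative coordinates
    device = (52.5205, 13.4061)
    ok = location_factor_ok(browser, device) and bluetooth_factor_ok(b"abc123", b"abc123")
    print("second factor accepted:", ok)
```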
Cloud data services not only store data but also share it among multiple users or clients, which casts doubt on its integrity due to software/hardware errors as well as human error. Several mechanisms exist that allow data owners and public verifiers to precisely, efficiently, and effectively audit the integrity of cloud data without retrieving the whole data set from the server. However, public auditing of the integrity of shared data with previously existing mechanisms may expose confidential information and identity privacy to public verifiers. In this paper, to achieve privacy-preserving public auditing, we propose a scheme for a third-party auditor (TPA) using a three-way handshaking protocol through the Extensible Authentication Protocol (EAP) with a liberated encryption standard. From the cloud's response, the TPA executes VerifyProof to certify the audit. In addition, with this mechanism, the identity of each segment in the shared data is kept private from public verifiers. Moreover, rather than verifying auditing tasks one by one, the mechanism is able to perform multiple auditing tasks simultaneously.
The increased number of cyber attacks makes the availability of services a major security concern. One common type of cyber threat is the distributed denial of service (DDoS) attack, which aims to disrupt legitimate users' access to services. It is easier for an insider with legitimate access to the system to deceive security controls, resulting in an insider attack. This paper proposes an Early Detection and Isolation Policy (EDIP) to mitigate insider-assisted DDoS attacks. EDIP detects an insider among all legitimate clients present in the system at the proxy level and isolates it from innocent clients by migrating it to an attack proxy. Further, an effective algorithm for detection and isolation of insiders is developed with the aim of maximizing attack isolation while minimizing disruption to benign clients. In addition, the concept of load balancing is used to prevent proxies from becoming overloaded.
In this paper, we explore the use of printed tags to authenticate products. Printed tags are a cheap alternative to RFID and other tag-based systems and do not require specialized equipment. Due to the simplistic nature of such printed codes, many security issues present in RFID-based solutions, such as tag impersonation, server impersonation, reader impersonation, replay attacks, and denial of service, need to be handled differently. We propose a cost-efficient scheme based on static tag-based hash chains to address these security threats. We analyze the security characteristics of this scheme and compare it to other product authentication schemes that use RFID tags. Finally, we show that our proposed statically printed QR codes can be at least as secure as RFID tags.
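The central building block named in the abstract, a hash chain anchored in a static tag, can be illustrated roughly as follows (chain length, encoding, and verification policy are assumptions, not the paper's protocol): the verifier stores only the chain tail and accepts each presented pre-image exactly once, walking backwards along the chain, which blocks straightforward replay.

```python
import hashlib
import os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list[bytes]:
    """Build a hash chain seed, H(seed), H(H(seed)), ... of length n+1."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

class Verifier:
    """Server side: stores only the chain tail; each accepted value becomes the new tail."""
    def __init__(self, tail: bytes):
        self.tail = tail

    def verify(self, value: bytes) -> bool:
        if H(value) == self.tail:
            self.tail = value       # advancing the tail prevents replay of used values
            return True
        return False

if __name__ == "__main__":
    chain = make_chain(os.urandom(32), 5)     # x0 .. x5; the tail x5 would be registered
    server = Verifier(chain[-1])
    print(server.verify(chain[-2]))   # True: first use of x4
    print(server.verify(chain[-2]))   # False: a replayed value is rejected
```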
This article presents PrOLoc, a localization system that combines partially homomorphic encryption with a new way of structuring the localization problem to enable efficient and accurate computation of a target's location while preserving the privacy of the observers.
Software Defined Networking (SDN) is a paradigm shift that changes the working principles of IP networks by separating the control logic from routers and switches and logically centralizing it within a controller. In this architecture the control plane (controller) communicates with the data plane (switches) through a control channel using a standards-compliant protocol, namely OpenFlow. While having a centralized controller creates an opportunity to monitor and program the entire network, as a side effect it causes the control plane to become a single point of failure. Denial of service (DoS) attacks or even heavy control traffic conditions can easily become real threats to the proper functioning of the controller, which indirectly harms the entire network. In this paper, we propose a solution to reduce the control traffic generated primarily during table-miss events. We utilize the buffer_id feature of the OpenFlow protocol, which was designed to identify individually buffered packets within a switch, reusing it to identify flows buffered as a series of packets during a table-miss, which happens when no rule in the switch flow tables matches the received packet. Thus, we allow the OpenFlow switch to send only the first packet of a flow to the controller on a table-miss, while buffering the rest of the packets in switch memory until the controller responds or a timeout occurs. The test results show that OpenFlow traffic is significantly reduced when the proposed method is used.
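A simplified, hedged model of the buffering idea, written as plain Python rather than switch firmware or actual OpenFlow messages (class and method names are assumptions): on a table-miss, only the first packet of a flow is forwarded to the controller, tagged with a buffer id, while later packets of the same flow wait in switch memory until a rule arrives or a timeout expires.

```python
import time

class TableMissBuffer:
    """Illustrative model of per-flow buffering during table-miss (not an OpenFlow implementation)."""

    def __init__(self, timeout_s=2.0):
        self.buffers = {}          # flow_key -> (buffer_id, [buffered packets], first_seen)
        self.next_buffer_id = 1
        self.timeout_s = timeout_s

    def on_table_miss(self, flow_key, packet, send_to_controller):
        """First packet of a flow goes to the controller; the rest wait in the buffer."""
        if flow_key not in self.buffers:
            buffer_id = self.next_buffer_id
            self.next_buffer_id += 1
            self.buffers[flow_key] = (buffer_id, [], time.monotonic())
            send_to_controller(buffer_id, packet)      # only one packet-in per flow
        else:
            self.buffers[flow_key][1].append(packet)   # buffered, no extra control traffic

    def on_flow_rule_installed(self, flow_key, forward):
        """Controller answered: flush buffered packets along the new rule, then drop the buffer."""
        _, packets, _ = self.buffers.pop(flow_key, (None, [], None))
        for pkt in packets:
            forward(pkt)

    def expire(self, drop):
        """Drop buffers whose controller response never arrived within the timeout."""
        now = time.monotonic()
        for key in [k for k, (_, _, t) in self.buffers.items() if now - t > self.timeout_s]:
            _, packets, _ = self.buffers.pop(key)
            for pkt in packets:
                drop(pkt)

if __name__ == "__main__":
    buf = TableMissBuffer()
    flow = ("10.0.0.1", "10.0.0.2", 80)
    buf.on_table_miss(flow, b"pkt1", lambda bid, p: print("packet-in", bid, p))
    buf.on_table_miss(flow, b"pkt2", lambda bid, p: print("packet-in", bid, p))  # no packet-in
    buf.on_flow_rule_installed(flow, lambda p: print("forward", p))
```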
Cloud computing presents unlimited prospects for the Information Technology (IT) industry and business enterprises alike. Rapid advancement brings a dark underbelly of new vulnerabilities and challenges unfolding with alarming regularity. Although cloud technology provides a ubiquitous environment that facilitates conducting business across disparate locations, the security effectiveness of this platform is called into question by threats that can bring everything subscribed to the cloud to a halt. However, the advantages of cloud platforms far outweigh the drawbacks, and studying new challenges helps overcome the drawbacks of this technology. One such emerging security threat is a ransomware attack on the cloud, which threatens to hold systems and data on the cloud network to ransom, with widespread damaging implications. This provides huge scope for IT security specialists to sharpen their skill sets to overcome this new challenge. This paper covers the broad cloud architecture, current inherent cloud threat mechanisms, the ransomware vulnerabilities posed, and suggested methods for mitigation.
In existing remote data integrity checking schemes, dynamic updates operate at the block level, which usually restricts the location of data inserted into a file due to the fixed size of a data block. In this paper, we propose a remote data integrity checking scheme with fine-grained updates for big data storage. The proposed scheme achieves the basic operations of insertion, modification, and deletion at the line level at any location in a file by designing a mapping relationship between line-level updates and block-level updates. Scheme analysis shows that the proposed scheme supports public verification and privacy preservation. Meanwhile, it performs data integrity checking with low computation and communication cost.
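To make the line-level/block-level relationship concrete, the sketch below shows a naive fixed-size mapping from a line index to a block index and offset, and which block tags a line-level operation would touch under that naive layout. It is an illustration of the problem space under assumed parameters, not the paper's mapping, which is designed to support updates at arbitrary positions without such restrictions.

```python
LINES_PER_BLOCK = 128   # illustrative block granularity, not from the paper

def line_to_block(line_index: int) -> tuple[int, int]:
    """Map a line-level position to (block index, offset inside the block)."""
    return line_index // LINES_PER_BLOCK, line_index % LINES_PER_BLOCK

def blocks_touched(op: str, line_index: int, total_lines: int) -> range:
    """Blocks whose tags must be recomputed for a naive fixed-size layout.

    A modification touches one block; an insertion or deletion shifts every
    following line, so all blocks from the affected one to the end are touched,
    which is exactly the cost a fine-grained mapping aims to avoid.
    """
    block, _ = line_to_block(line_index)
    last_block, _ = line_to_block(max(total_lines - 1, 0))
    if op == "modify":
        return range(block, block + 1)
    return range(block, last_block + 1)   # insert / delete

if __name__ == "__main__":
    print(line_to_block(300))                        # (2, 44)
    print(list(blocks_touched("modify", 300, 1000))) # [2]
    print(list(blocks_touched("insert", 300, 1000))) # [2, 3, 4, 5, 6, 7]
```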
Over the years cybercriminals have misused the Domain Name System (DNS) - a critical component of the Internet - to gain profit. Despite this persisting trend, little empirical information about the security of Top-Level Domains (TLDs) and of the overall 'health' of the DNS ecosystem exists. In this paper, we present security metrics for this ecosystem and measure the operational values of such metrics using three representative phishing and malware datasets. We benchmark entire TLDs against the rest of the market. We explicitly distinguish these metrics from the idea of measuring security performance, because the measured values are driven by multiple factors, not just by the performance of the particular market player. We consider two types of security metrics: occurrence of abuse and persistence of abuse. In conjunction, they provide a good understanding of the overall health of a TLD. We demonstrate that attackers abuse a variety of free services with good reputation, affecting not only the reputation of those services, but of entire TLDs. We find that, when normalized by size, old TLDs like .com host more bad content than new generic TLDs. We propose a statistical regression model to analyze how the different properties of TLD intermediaries relate to abuse counts. We find that, next to TLD size, abuse is positively associated with domain pricing (i.e., registries that provide free domain registrations witness more abuse). Last but not least, we observe a negative relation between the DNSSEC deployment rate and the count of phishing domains.
With the rapid growth in the number of social tagging users, it has become very important for social tagging systems to recommend required resources to users rapidly and accurately. First, the architecture of an agent-based intelligent social tagging system is constructed using agent technology. Second, the design and implementation of user interest mining, personalized recommendation, and common preference group recommendation are presented. Finally, a self-adaptive recommendation strategy for social tagging and its implementation are proposed based on an analysis of the shortcomings of the personalized recommendation strategy and the common preference group recommendation strategy. The self-adaptive recommendation strategy achieves an equilibrium between efficiency and accuracy, thereby resolving the trade-off between efficiency and accuracy in the personalized recommendation model and the common preference recommendation model.
Many innovations in the field of cryptography have been made in recent decades, ensuring the confidentiality of message content. However, sometimes it is not enough to secure the message, and the communicating parties need to hide the very fact that any communication is taking place. This problem is solved by covert channels. A huge number of ideas and implementations of different types of covert channels have been proposed since covert channels were first described. The spread of the Internet and networking technologies motivated the use of network protocols for inventing new covert communication methods and has led to the emergence of a new class of threats related to data leakage via network covert channels. In recent years, web applications such as web browsers, email clients, and web messengers have become indispensable elements of business and everyday life. That is why ubiquitous HTTP messages are so useful as covert information containers. The use of HTTP for implementing covert channels may also increase their capacity thanks to HTTP's flexibility and wide distribution. We propose a detailed analysis of all known HTTP covert channels and of techniques for their detection and capacity limitation.
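As one concrete example of the kind of channel such a survey covers (a generic illustration, not necessarily a technique analysed in the paper), data can be hidden in the capitalisation of HTTP header names, which receivers treat as case-insensitive; the sketch below encodes one bit per letter.

```python
def encode_header_case(name: str, bits: str) -> str:
    """Encode one bit per letter in the case of an HTTP header name (1 = upper, 0 = lower)."""
    out = []
    bit_iter = iter(bits)
    for ch in name:
        if ch.isalpha():
            b = next(bit_iter, "0")          # pad with 0 if the message is shorter
            out.append(ch.upper() if b == "1" else ch.lower())
        else:
            out.append(ch)
    return "".join(out)

def decode_header_case(name: str) -> str:
    """Recover the bits from the capitalisation pattern of a header name."""
    return "".join("1" if ch.isupper() else "0" for ch in name if ch.isalpha())

if __name__ == "__main__":
    covert = encode_header_case("Accept-Encoding", "10110011010101")  # 14 letters, 14 bits
    print(covert)                      # header still parses normally on the receiving side
    print(decode_header_case(covert))  # "10110011010101"
```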
Cloud computing has emerged as a compelling vision for managing data and delivering query answering capability over the internet. This new way of computing also poses a real risk of disclosing confidential information to the cloud. Searchable encryption addresses this issue by allowing the cloud to compute the answer to a query based on the ciphertexts of data and queries. Thanks to its inner product preservation property, the asymmetric scalar-product-preserving encryption (ASPE) has been adopted and enhanced in a growing number of works to perform a variety of queries and tasks in the cloud computing setting. However, the security property of ASPE and its enhanced schemes has not been studied carefully. In this paper, we show a complete disclosure of ASPE and several previously unknown security risks of its enhanced schemes. Meanwhile, efficient algorithms are proposed to learn the plaintext of data and queries encrypted by these schemes with little or no knowledge beyond the ciphertexts. We demonstrate these risks on real data sets.
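The property being attacked, inner-product preservation, can be seen in a tiny numeric sketch (an illustrative 2x2 secret matrix, not real ASPE parameters or the paper's attack): data points are encrypted with the transpose of a secret matrix M and queries with its inverse, so scalar products survive encryption.

```python
# Illustrative 2x2 secret matrix M and its inverse (det(M) = 1); not real ASPE parameters.
M = [[3, 1],
     [2, 1]]
M_INV = [[1, -1],
         [-2, 3]]

def mat_vec(A, v):
    """Multiply matrix A by vector v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

if __name__ == "__main__":
    p = [4, 5]                           # database point
    q = [2, 3]                           # query vector
    enc_p = mat_vec(transpose(M), p)     # point encrypted with M^T
    enc_q = mat_vec(M_INV, q)            # query encrypted with M^{-1}
    # (M^T p) . (M^{-1} q) = p^T M M^{-1} q = p . q, so the scalar product is preserved.
    print(dot(p, q), dot(enc_p, enc_q))  # both print 23
```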
Software Defined Networking (SDN) presents a unique opportunity to manage and orchestrate cloud networks. Educational institutions, like many other industries, face a lot of security threats. We have established an SDN-enabled Demilitarized Zone (DMZ), a Science DMZ, to serve as a testbed for securing the ASU Internet2 environment. The Science DMZ allows researchers to conduct in-depth analysis of security attacks and take necessary countermeasures using an SDN-based command and control (C&C) center. Demo URL: https://www.youtube.com/watch?v=8yo2lTNV3r4.
Science is conducted collaboratively, often requiring knowledge sharing about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets, but more often includes software, its past execution, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for aggregation and identification of diverse elements of computational experiments. While a necessary method, mere aggregation is not sufficient for the sharing of computational experiments. Other users must be able to easily recompute on these shared research objects. In this paper, we present the sciunit, a reusable research object in which aggregated content is recomputable. We describe a Git-like client that efficiently creates, stores, and repeats sciunits. We show through analysis that sciunits repeat computational experiments with minimal storage and processing overhead. Finally, we provide an overview of sharing and reproducible cyberinfrastructure based on sciunits gaining adoption in the domain of geosciences.
SDN is a new network architecture that separates control logic from data forwarding, providing a high degree of openness and programmability, with many advantages not available in traditional networks. However, some problems remain unsolved; for example, the controller is easily attacked because packet sources are not verified, and the limited range of match fields cannot meet the requirements of precise control of network services. To address these problems, this paper proposes an SDN security control and forwarding mechanism based on cipher identification: when packets flow into and out of the network, the forwarding device must verify their source to ensure user non-repudiation and packet authenticity. In addition, administrators control data forwarding based on cipher identification, forming network management and control capabilities based on users, resources, and business flows, and providing a new method and means for future Internet security.
Distributed denial of service (DDoS) attacks have been raising serious security concerns for banks, financial corporations, public institutions, and data centers. The emerging wave of the Internet of Things (IoT) also raises new concerns about smart devices. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) have provided a new paradigm for network security. In this paper, we propose a new method to efficiently prevent DDoS attacks based on an SDN/NFV framework. To resolve the problem that normal packets are blocked due to the inspection of suspicious packets, we developed a threshold-based method that provides clients with efficient, fast DDoS attack mitigation. In addition, we use open-source code to develop the security functions that implement our solution for SDN-based network security. The source code is based on the NETCONF protocol [1] and the YANG data model [2].
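The threshold-based idea can be sketched roughly as follows (the window size and threshold are assumed values, and the actual mechanism lives in NETCONF/YANG-configured security functions rather than application code): requests are counted per client per time window, and a client exceeding the threshold is flagged for diversion or inspection while others pass through untouched.

```python
import time
from collections import defaultdict, deque

class ThresholdGuard:
    """Illustrative per-client request-rate check; not the paper's NETCONF/YANG implementation."""

    def __init__(self, max_requests=100, window_s=1.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = defaultdict(deque)   # client_ip -> timestamps of recent requests

    def allow(self, client_ip: str) -> bool:
        """Return True if the request should pass; False if the client should be diverted."""
        now = time.monotonic()
        q = self.history[client_ip]
        q.append(now)
        while q and now - q[0] > self.window_s:   # drop timestamps outside the window
            q.popleft()
        return len(q) <= self.max_requests

if __name__ == "__main__":
    guard = ThresholdGuard(max_requests=5, window_s=1.0)
    for i in range(8):
        print(i, guard.allow("198.51.100.7"))   # first 5 requests pass, the rest are flagged
```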