Biblio
Efficient encryption and decryption of data is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm designed to achieve a higher level of security by hiding the meaning of a message in unprintable characters. The central idea is to make the encrypted message unmistakably unprintable through several ASCII conversions and a cyclic mathematical function. The original message is divided into packets, and a binary matrix is formed for each packet so that the ASCII value of every character in the encrypted message falls below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, produced by three rounds of encryption, is entirely unprintable text, giving the algorithm a higher level of security without increasing the size of the data or losing any of it.
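The below-32 property can be illustrated with a minimal sketch. This is hypothetical: the paper's packet division, binary matrices, and cyclic function are not reproduced here, and unlike the paper's scheme this toy mapping doubles the message length. Splitting each 8-bit character code into two 4-bit halves guarantees that every output byte lands in the unprintable control range:

```python
def to_unprintable(message: str) -> bytes:
    # Split each 8-bit character code into two 4-bit halves so that
    # every output byte is below 32, i.e. an unprintable control code.
    out = bytearray()
    for ch in message.encode("ascii"):
        out.append(ch >> 4)      # high nibble: value 0..15
        out.append(ch & 0x0F)    # low nibble: value 0..15
    return bytes(out)

def from_unprintable(data: bytes) -> str:
    # Recombine consecutive nibble pairs into the original characters.
    it = iter(data)
    return "".join(chr((hi << 4) | lo) for hi, lo in zip(it, it))
```

Any injective mapping into the 0–31 range would serve the same purpose; the nibble split is merely the simplest one.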
The Domain Name System (DNS) is widely seen as a vital protocol of the modern Internet. For example, popular services like load balancers and Content Delivery Networks rely heavily on DNS. Because of its important role, DNS is also a desirable target for malicious activities such as spamming, phishing, and botnets. To protect networks against these attacks, a number of DNS-based security approaches have been proposed. The key insight of our study is to measure the effectiveness of security approaches that rely on DNS in large-scale networks. For this purpose, we answer the following questions: How often is DNS used? Are most Internet flows established after contacting DNS? In this study, we collected data from the University of Auckland campus network, with more than 33,000 Internet users, and processed it to find out how DNS is being used. Moreover, we studied the flows that were established with and without contacting DNS. Our results show that less than 5 percent of the observed flows use DNS. Therefore, we argue that security approaches that depend solely on DNS are not sufficient to protect large-scale networks.
The University of Illinois at Urbana-Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve the security and resiliency of cyber-physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
We consider several challenging problems in complex networks (communication, control, social, economic, biological, hybrid) as problems in cooperative multi-agent systems. We describe a general model for cooperative multi-agent systems that involves several interacting dynamic multigraphs and identify three fundamental research challenges underlying these systems from a network science perspective. We show that the framework of constrained coalitional network games captures in a fundamental way the basic tradeoff of benefits versus cost of collaboration in multi-agent systems, and demonstrate that it can explain network formation and whether or not collaboration emerges. Multi-metric problems in such networks are analyzed via a novel approach based on multiple partially ordered semirings. We investigate the interrelationship between the collaboration and communication multigraphs in cooperative swarms and the role of the communication topology among the collaborating agents in improving the performance of distributed task execution. Expander graphs emerge as efficient communication topologies for collaborative control. We relate these models and approaches to statistical physics.
This paper presents a novel architecture for identity and access management (IAM) in a multi-tier cloud infrastructure, in which most services are supported by massive-scale data centres over the Internet. A multi-tier cloud infrastructure applies the tier-based model from software engineering to provide resources in different tiers. In this paper we focus on the design and implementation of a centralized identity and access management system for the multi-tier cloud infrastructure. First, we discuss identity and access management requirements in such an environment and propose our solution to address them. Next, we discuss approaches to improve the performance of the IAM system and make it scalable to billions of users. Finally, we present experimental results based on the current deployment in the SAVI Testbed. We show that our IAM system outperforms previously proposed IAM systems for cloud infrastructure by a factor of 9 in throughput when the number of users is small, and handles about 50 times more requests at peak usage. Because our architecture combines green threads with load-balanced processes, it uses fewer system resources and easily scales up to handle a high number of requests.
An improved harmony search algorithm for solving continuous optimization problems is presented in this paper. In the proposed algorithm, an elimination principle is developed for choosing harmonies from the harmony memory, so that harmonies with better fitness have more opportunities to be selected when generating new harmonies. Two key control parameters, the pitch adjustment rate (PAR) and bandwidth distance (bw), are dynamically adjusted to favor exploration in the early stages and exploitation in the final stages of the search, according to the different search spaces of the optimization problems. Numerical results on 12 benchmark problems show that the proposed algorithm finds better solutions more effectively than existing HS variants.
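A minimal sketch of harmony search with dynamically adjusted PAR and bw may clarify the mechanism. The specifics here are assumptions, not the paper's formulation: PAR grows linearly, bw decays geometrically, and uniform memory selection stands in for the paper's fitness-biased elimination principle.

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, iters=2000):
    # Harmony memory: hms random candidate solutions and their fitness.
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(h) for h in hm]
    par_min, par_max = 0.1, 0.9
    bw_max, bw_min = (hi - lo) / 20, 1e-4
    for t in range(iters):
        # Dynamic control parameters: PAR rises (more pitch adjustment,
        # i.e. exploitation, late on); bw shrinks (smaller perturbations).
        par = par_min + (par_max - par_min) * t / iters
        bw = bw_max * (bw_min / bw_max) ** (t / iters)
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                x = random.choice(hm)[d]                 # memory consideration
                if random.random() < par:
                    x += random.uniform(-1, 1) * bw      # pitch adjustment
            else:
                x = random.uniform(lo, hi)               # random selection
            new.append(min(hi, max(lo, x)))
        fx = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fx < fit[worst]:                              # replace worst harmony
            hm[worst], fit[worst] = new, fx
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]
```

On a smooth test function such as the 2-D sphere, this sketch converges to near-zero fitness within a couple of thousand improvisations.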
Infrastructure-based vehicular networks can be applied in different social contexts, such as health care, transportation and entertainment. They can easily take advantage of the benefits that wireless mesh networks (WMNs) provide for mobility, since WMNs essentially support the technological convergence and resilience required for the effective operation of services and applications. However, infrastructure-based vehicular networks are prone to attacks, such as ARP packet flooding, that compromise mobility management and users' network access. Hence, this work proposes MIRF, a secure mobility scheme based on reputation and filtering to mitigate flooding attacks on mobility management. The efficiency of the MIRF scheme has been evaluated by simulations of urban scenarios with and without attacks. Analyses show that it significantly improves the packet delivery ratio in scenarios with attacks, mitigating their intentional negative effects, such as by reducing malicious ARP requests. Furthermore, in scenarios under attack the scheme improved the number of handoffs, which also completed faster than in scenarios without the scheme.
The Internet of Things (IoT) is here: more than 10 billion units are already connected, and five times as many devices are expected to be deployed in the next five years. Technological standardization and governments' management and fostering of rapid innovation are among the main challenges of the IoT, but security and privacy are the key to making the IoT reliable and trusted. Security mechanisms for the IoT should provide features such as scalability, interoperability and lightweight operation. This paper addresses authentication and access control in the context of the IoT. It presents Physical Unclonable Functions (PUF), which can provide cheap, secure, tamper-proof secret keys to authenticate constrained M2M devices. To be successfully used in the IoT context, this technology needs to be embedded in a standardized identity and access management framework. Embedded Subscriber Identity Modules (eSIM), on the other hand, can provide cellular connectivity with scalability, interoperability and standards-compliant security protocols. The paper discusses an authorization scheme for a constrained resource server that takes advantage of PUF and eSIM features, and presents concrete IoT use cases (SCADA and building automation).
Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. To apply deep packet inspection (DPI) selectively, one has to isolate high-risk network activities in real time from a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for simultaneously assessing risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows interdependently: the risk score of a flow influences the risk scores of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at that host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
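The interdependent scoring can be sketched as a fixed-point iteration. This is a hypothetical simplification, not the paper's algorithm: the flow tuple format, the mixing weight `alpha`, and the averaging rules are all assumptions made for illustration.

```python
def risk_scores(flows, alpha=0.5, iters=20):
    # flows: list of (src_host, dst_host, base_risk) tuples -- a
    # hypothetical input format standing in for monitored traffic.
    hosts = {h: 0.0 for s, d, _ in flows for h in (s, d)}
    flow_risk = [b for _, _, b in flows]
    for _ in range(iters):
        # A flow's risk mixes its own base risk with its endpoints' risk.
        flow_risk = [alpha * b + (1 - alpha) * (hosts[s] + hosts[d]) / 2
                     for s, d, b in flows]
        # A host's risk is the mean risk of the flows it participates in.
        hosts = {h: sum(r for (s, d, _), r in zip(flows, flow_risk)
                        if h in (s, d)) /
                    sum(1 for s, d, _ in flows if h in (s, d))
                 for h in hosts}
    return hosts, flow_risk
```

With `alpha < 1` the mutual update is a contraction, so the scores settle quickly; hosts touching risky flows end up riskier, and their other flows inherit some of that risk.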
Ad hoc networks are a modern technology for providing communication between devices without any prior infrastructure setup, in an "on the spot" manner. But there is a catch: so far there is no security scheme that suits the ad hoc properties of this type of network while also accomplishing the needed security objectives. The most promising proposals are the self-organized schemes. This paper presents work in progress toward a new self-organized key management scheme that uses identity-based cryptography to rule out some of the attacks that can be performed against the schemes proposed so far, while preserving their advantages. The paper starts with a survey of the most important self-organized key management schemes and a short analysis of their advantages and disadvantages. It then presents our new scheme and, through informal analysis, the advantages it has over the other proposals.
In this paper, we draw inspiration from two analogies, the warfare kill zone and the airport check-in system, to tackle the problem of spam botnet detection. We add a new, third line of defense to the defense-in-depth model, represented by a security framework named the Spam Trapping System (STS), which adopts a prevent-then-detect approach to fight spam botnets. The framework exploits the application-sandboxing principle to prevent spam from leaving the host and to detect the corresponding malware bot. We show that the proposed framework can ensure better security against malware bots. In addition, an analytical study demonstrates that the framework offers optimal performance in terms of detection time and computational cost in comparison to intrusion detection systems based on static and dynamic analysis.
Cloud technologies are increasingly important for IT departments, allowing them to concentrate on strategy rather than maintaining data centers; one of the biggest advantages of the cloud, especially hybrid clouds, is the ability to share computing resources among multiple providers to overcome infrastructure limitations. User identity federation is considered the second major risk in the cloud, and since business organizations use multiple cloud service providers, IT departments face a range of constraints. Multiple solutions to this problem have been suggested, such as federated identity, which has a number of advantages despite suffering from challenges common to new technologies. This paper examines federated identity, its components, advantages, and disadvantages, and then proposes a number of useful scenarios for managing identity in hybrid cloud infrastructures.
With the development of cloud computing and NoSQL databases, more and more sensitive information is stored in NoSQL databases, which exposes quite a few security vulnerabilities. This paper discusses the security features of the MongoDB database and proposes a transparent middleware implementation. Analysis of the experimental results shows that this transparent middleware can efficiently encrypt sensitive data specified by users at the dataset level, and existing application systems require few modifications to adopt it.
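The idea of a transparent encryption layer can be sketched as a wrapper that encrypts designated fields on insert and decrypts them on read. This is a toy illustration, not the paper's MongoDB middleware: the store is an in-memory list, and the XOR keystream is deliberately simplistic and insecure; a real middleware would use a vetted cipher such as AES.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy hash-based keystream -- NOT cryptographically secure; a real
    # middleware would use a vetted cipher such as AES instead.
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

class EncryptingStore:
    """Dict-like document store that transparently encrypts the fields
    named in `sensitive` on insert and decrypts them on read."""
    def __init__(self, key: bytes, sensitive: set):
        self.key, self.sensitive, self.docs = key, sensitive, []

    def _xor(self, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, _keystream(self.key, len(data))))

    def insert(self, doc: dict) -> None:
        # Sensitive fields are ciphertext at rest; others pass through.
        self.docs.append({k: self._xor(v.encode()) if k in self.sensitive else v
                          for k, v in doc.items()})

    def read(self, i: int) -> dict:
        # Decryption happens here, invisibly to the calling application.
        return {k: self._xor(v).decode() if k in self.sensitive else v
                for k, v in self.docs[i].items()}
```

The point of the transparency claim is that application code keeps calling `insert` and `read` with plain dictionaries; only the wrapper knows which fields are encrypted.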
In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables secure distribution of personalized decryption keys containing the buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using non-blind decoding, and that it is robust against common attacks. The proposed scheme can be a valid solution to both the customer's-rights and scalability issues in multimedia content distribution.
Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages mainly support legitimate activity, it is now also exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experiment campaign we performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered for state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP flood and UDP storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud compared to other legitimate workloads.
Optimizing memory access is critical for performance and power efficiency. CPU manufacturers have developed sampling-based performance measurement units (PMUs) that report precise costs of memory accesses at specific addresses. However, this data is too low-level to be meaningfully interpreted and contains an excessive amount of irrelevant or uninteresting information. We have developed a method to gather fine-grained memory access performance data for specific data objects and regions of code with low overhead and attribute semantic information to the sampled memory accesses. This information provides the context necessary to more effectively interpret the data. We have developed a tool that performs this sampling and attribution and used the tool to discover and diagnose performance problems in real-world applications. Our techniques provide useful insight into the memory behaviour of applications and allow programmers to understand the performance ramifications of key design decisions: domain decomposition, multi-threading, and data motion within distributed memory systems.
With the rapid development of information technology, information security management is ever more important. The OpenSSL security incident showed the distinct disadvantages of the current hierarchical structure of security management: software and hardware facilities must enforce security management in their implementations of crucial basic protocols in order to ease, to a certain extent, the security threats against those facilities. This article expounds cross-layer security management and enumerates five contributory factors for the core problems that such management faces.
In large-scale systems, user authentication usually needs the assistance from a remote central authentication server via networks. The authentication service however could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it on multi-factor authentication protocols in an efficient and generic way.
We propose a resilience architecture for improving the security and dependability of authentication and authorization infrastructures, in particular the ones based on RADIUS and OpenID. This architecture employs intrusion-tolerant replication, trusted components and entrusted gateways to provide survivable services ensuring compatibility with standard protocols. The architecture was instantiated in two prototypes, one implementing RADIUS and another implementing OpenID. These prototypes were evaluated in fault-free executions, under faults, under attack, and in diverse computing environments. The results show that, beyond being more secure and dependable, our prototypes are capable of achieving the performance requirements of enterprise environments, such as IT infrastructures with more than 400k users.
Smart objects are small devices with limited system resources, typically made to fulfill a single simple task. By connecting smart objects and thus forming an Internet of Things, the devices can interact with each other and their users and support a new range of applications. Due to the limitations of smart objects, common security mechanisms are not easily applicable. Small message sizes and the lack of processing power severely limit the devices' ability to perform cryptographic operations. This paper introduces a protocol for delegating client authentication and authorization in a constrained environment. The protocol describes how to establish a secure channel based on symmetric cryptography between resource-constrained nodes in a cross-domain setting. A resource-constrained node can use this protocol to delegate authentication of communication peers and management of authorization information to a trusted host with less severe limitations regarding processing power and memory.
Cross-Site Scripting (XSS) is a common attack technique in which attackers insert code into the output of a web application; the code is delivered to a visitor's web browser, executes automatically, and steals sensitive information. To protect users from XSS attacks, many client-side solutions have been implemented, most commonly filters that sanitize malicious input. However, many of these filters offer no protection against newly designed, sophisticated attacks such as multiple points of injection or injection into script context. This paper proposes and implements an approach based on encoding unfiltered reflections for detecting web applications vulnerable to these sophisticated attacks. Results show that the proposed approach achieves an accurate, higher detection rate of exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities; this could be an even more effective solution if integrated inside the browser rather than enforced as an extension.
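Output encoding of reflected input, the general defense underlying the approach above, can be illustrated with Python's standard `html.escape` (a generic sketch, not the paper's implementation; `encode_reflection` is a hypothetical name):

```python
import html

def encode_reflection(user_input: str) -> str:
    # Encode the characters that could break out of an HTML context
    # (&, <, >, quotes), so injected markup is rendered as inert text
    # by the browser instead of being executed as script.
    return html.escape(user_input, quote=True)
```

For example, a reflected payload like `<script>alert(1)</script>` becomes `&lt;script&gt;alert(1)&lt;/script&gt;`, which the browser displays verbatim. The paper's insight is to apply such encoding specifically to reflections that the application failed to filter, and to use the result to detect exploitable injection points.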
This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT together with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment and trends and to monitor keywords in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to respond effectively to contingency operations as they unfold.