Biblio
In this paper, we analyze the effectiveness of quantum algorithms in solving problems that are computationally hard for classical computing. Specifically, we focus on several well-known quantum algorithms and discuss their impact on the classical computing techniques used in computer science.
The use of Knuth's Rule and Bayesian Blocks constant piecewise models for the characterization of RFID traffic has already been proposed. This study presents an evaluation of these two modeling techniques when applied to various RFID traffic patterns. The data sets used in this study consist of time series of binned RFID command counts. More specifically, we compare the shape of several empirical plots of raw data sets obtained from experimental RFID readings against the constant piecewise graphs produced as output by the two modeling algorithms. One issue limiting the applicability of modeling techniques to RFID traffic is the large number of different RFID applications in use, and this is the main motivation for this study. The general expectation is that RFID traffic traces from different applications are sequences with different histogram shapes. Therefore, no modeling technique can be considered universal for modeling the traffic from multiple RFID applications without first evaluating its model performance on various traffic patterns. We postulate that differences in traffic patterns are present if the histograms of two different sets of RFID traces form visually different plot shapes.
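For illustration, a minimal sketch of the two binning techniques named above, using astropy's implementations (which require scipy) on a synthetic stream of command timestamps; the data, parameter values, and variable names are assumptions for illustration, not the study's data sets.

    # Sketch: piecewise-constant models of binned RFID command activity via
    # astropy's Knuth-rule and Bayesian Blocks implementations (synthetic data).
    import numpy as np
    from astropy.stats import knuth_bin_width, bayesian_blocks

    # Hypothetical time series of RFID command timestamps (seconds).
    rng = np.random.default_rng(0)
    timestamps = np.sort(rng.exponential(scale=0.5, size=2000).cumsum())

    # Knuth's rule: optimal fixed-width bins for the histogram of timestamps.
    knuth_width, knuth_edges = knuth_bin_width(timestamps, return_bins=True)

    # Bayesian Blocks: adaptive-width piecewise-constant segmentation of events.
    block_edges = bayesian_blocks(timestamps, fitness='events', p0=0.01)

    counts, _ = np.histogram(timestamps, bins=block_edges)
    print(f"Knuth bin width: {knuth_width:.3f}s ({len(knuth_edges) - 1} bins)")
    print(f"Bayesian Blocks segments: {len(block_edges) - 1}")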
In the context of emerging applications such as the IoT, an RFID framework that can dynamically incorporate, identify, and seamlessly regulate RFID tags is considered exciting. Earlier RFID frameworks developed with older web technologies were limited in their ability to provide complete information about RFID tags and their respective locations. The new and emerging web technologies have transformed this scenario, and frameworks can now be developed that include all the flexibility and security required for seamless applications such as the monitoring of RFID tags. This paper revisits and proposes a generic RFID framework built using the latest web technologies and demonstrates its customizability through an application for tracking personal user objects. It is shown that a framework based on newer web technologies can indeed be robust, uniform, unified, and integrated.
Approaches for the automatic analysis of security policies at the source-code level cannot trivially be applied to binaries. This is due to the lack of high-level semantics in low-level object code and the fundamental problem that control-flow recovery from binaries is difficult. We present a novel approach to recovering the control flow of binaries that is both safe and efficient. The key idea of our approach is to use the information contained in security mechanisms to approximate the targets of computed branches. To achieve this, we first define a restricted control transition intermediate language (RCTIL), which restricts the number of possible targets for each branch to a finite number of given targets. Based on this intermediate language, we demonstrate how a safe model of the control flow can be recovered without data-flow analyses. Our evaluation shows that this makes our solution more efficient than existing solutions.
There are two types of network architectures: wired networks and wireless networks. A MANET is one example of a wireless network. Every network has its own features that distinguish it from other types of network. Some features of MANETs are an infrastructure-less design, mobility, and a dynamic network topology, which make them different from, and more popular than, wired networks; however, these features also create problems for achieving security, because there is no centralized authority inside the network and data transmission is complicated by node mobility. Achieving security in a wired network is somewhat easier than in a MANET: in a wired network the user only needs to protect the main centralized authority, whereas in a MANET no centralized authority is available, so protecting a server is more difficult than in a wired network. Sending and receiving data is also easy in a wired network, whereas mobility makes this process difficult in a MANET. Protecting a server or central repository without making use of secret sharing creates many challenges and security problems. The proposed system uses a secret sharing method to protect the server from malicious nodes and 'A New Particle Swarm Optimization Method for MANETs' (NPSOM) to perform data sending and receiving in an optimized way. The NPSOM technique is compared with the standard particle swarm optimization (PSO) technique, which was originally designed by Kennedy and Eberhart in 1995. These methods are based on four different types of parameters and were inspired by the collective behavior of animals, such as bird flocking, fish schooling, and ant colonies. The proposed system maps PSO onto MANETs, where a particle corresponds to a node in the network, a swarm is a collection of multiple nodes, and optimization means finding the best and nearest route to the destination. Each particle learns from its own previous best solution to the given optimization problem, likewise considers the group's previous best solution to the same problem, and finally corrects its own solution based on these values; this process is repeated to find the best, optimal solution value (see the PSO sketch below). In the NPSOM technique used in the proposed system, each particle additionally adjusts its position according to its own previous worst solution as well as the swarm's previous worst solution when searching for the best, optimal value. In other words, the proposed system concentrates on sidestepping the worst solutions previously found by the particle and by the swarm.
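The classic personal-best/global-best update described above can be sketched as follows; this is the standard PSO rule on a toy continuous cost function, not the NPSOM variant or an actual MANET routing objective, and the inertia and acceleration parameters are illustrative assumptions.

    # Sketch: classic PSO (personal best + swarm best) minimising a toy cost.
    import numpy as np

    def pso(cost, dim=2, particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-10, 10, (particles, dim))      # particle positions
        vel = np.zeros_like(pos)                          # particle velocities
        pbest, pbest_cost = pos.copy(), np.apply_along_axis(cost, 1, pos)
        gbest = pbest[pbest_cost.argmin()]                # swarm's best position
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            costs = np.apply_along_axis(cost, 1, pos)
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
            gbest = pbest[pbest_cost.argmin()]
        return gbest, pbest_cost.min()

    # Example: minimise squared distance to a hypothetical "destination" point.
    best, best_cost = pso(lambda p: np.sum((p - np.array([3.0, -2.0])) ** 2))
    print(best, best_cost)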
Observing semantic dependencies in large and heterogeneous networks is a critical task, since it is quite difficult to find the actual source of a malfunction when an error occurs. Dependencies may exist between many network nodes and across multiple hops in a path. If those dependency structures are unknown, debugging errors becomes quite difficult. Since CPS and other large networks change at runtime and consist of custom software and hardware as well as off-the-shelf components, approaches for detecting dependencies between nodes must be able to handle more than just one's own components. In this paper we present an extension to the Information Flow Monitor (IFM) approach. Our goal is to enable this approach to handle unalterable black-box nodes. This is quite challenging, since the IFM originally requires each network node to be compliant with the IFM protocol.
Cloud computing, in simple terms, is storing and accessing data over the internet. The data stored in the cloud is managed by cloud service providers. Storing data in the cloud saves users time and memory, but once a user stores data in the cloud, he loses control over it. Hence, several security issues must be handled to keep users' data safe in the cloud. In this work, we propose a secure auditing system using a Third Party Auditor (TPA). We use the Advanced Encryption Standard (AES) algorithm to encrypt users' data and the Secure Hash Algorithm (SHA-2) to compute message digests. The system is deployed in the Amazon EC2 cloud by creating a Windows Server instance. The results obtained demonstrate that our proposed work is safe and audits the files in a reasonable amount of time.
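A minimal sketch of the encryption and digest steps named above, using Python's hashlib and the 'cryptography' package; key management and the actual TPA challenge protocol are omitted, and the payload and variable names are assumptions.

    # Sketch: AES encryption of a user's file and a SHA-2 digest for auditing.
    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    data = b"user file contents"                 # illustrative payload
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)

    ciphertext = AESGCM(key).encrypt(nonce, data, None)   # AES-256-GCM encryption
    digest = hashlib.sha256(ciphertext).hexdigest()       # SHA-2 message digest

    # The digest (not the key) could be handed to a third-party auditor, which
    # later recomputes it over the stored ciphertext to verify integrity.
    print(digest)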
Many customers rank cloud security as a major challenge that threatens their work and reduces their trust in cloud service providers. Hence, significant improvement is required to establish security measures better adapted to recent technologies, and especially to distributed architectures. Cloud-generated log files record meaningful data, and analyzing them mines insightful information about attackers' activities: it identifies malicious user behaviors and predicts new suspected events. Moreover, centralizing log files prevents insiders from causing damage to the system. In this paper, we propose to move sensitive log files to a single server provider and to combine MapReduce programming and k-means in the same algorithm to cluster observed events into classes with similar features. To label unknown user behaviors and predict newly suspected activities, this approach relies on cosine distances and deviation metrics.
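As a small illustration of the clustering step, the sketch below runs k-means under an approximate cosine distance by L2-normalising the event vectors first; the per-event features, cluster count, and use of scikit-learn (rather than a MapReduce implementation) are assumptions for illustration only.

    # Sketch: clustering log-event feature vectors with cosine-style k-means.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import normalize

    # Hypothetical per-event features: [failed logins, bytes out, distinct IPs].
    events = np.array([
        [1, 200, 1], [0, 180, 1], [2, 220, 1],      # ordinary behaviour
        [40, 90000, 25], [55, 120000, 30],          # suspicious behaviour
    ], dtype=float)

    # Cosine distance on raw vectors ~ Euclidean distance on unit vectors.
    unit_events = normalize(events)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unit_events)
    print(labels)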
The pervasive use of databases for the storage of critical and sensitive information in many organizations has led to an increase in the rate at which databases are exploited in computer crimes. While there are several techniques and tools available for database forensic analysis, such tools usually assume a priori database preparation, such as relying on tamper-detection software already being in place and on the use of detailed logging. Further, such tools are built in and thus can be compromised or corrupted along with the database itself. In practice, investigators need forensic and security audit tools that work on poorly configured systems and make no assumptions about the extent of damage or malicious hacking in a database. In this paper, we present our database forensics methods, which are capable of examining database content from a storage (disk or RAM) image without using any log or file system metadata. We describe how these methods can be used to detect security breaches in an untrusted environment where the security threat arose from a privileged user (or someone who has obtained such privileges). Finally, we argue that a comprehensive and independent audit framework is necessary in order to detect and counteract threats in an environment where the security breach originates from an administrator (either at the database or the operating system level).
Software-Defined Networking (SDN) represents a major shift from ossified hardware-based networks to programmable software-based networks. It introduces significant granularity, visibility, and flexibility into networking, but at the same time brings new security challenges. Although the research community is making progress in addressing both the opportunities in SDN and the accompanying security challenges, very few educational materials have been designed to incorporate the latest research results and engage students in learning about SDN security. In this paper, we present our newly designed SDN security education materials, which can be used to meet the ever-increasing demand for high-quality cybersecurity professionals with expertise in SDN security. The designed security education materials incorporate the latest research results in SDN security and are integrated into CloudLab, an open cloud platform, for effective hands-on learning. Through a user study, we demonstrate that students have a better understanding of SDN security after participating in these well-designed CloudLab-based security labs, and that they also acquire strong research interest in SDN security.
In this paper, we propose a new randomized response algorithm that can achieve differential privacy and utility guarantees for consumers' behaviors and can process a batch of data at each time step. Firstly, differing from traditional differentially private approaches, we add randomized response noise to the behavior signatures matrix to achieve an acceptable utility-privacy tradeoff. Secondly, a behavior signature modeling method based on sparse coding is proposed: after some lightweight training on the energy consumption data, the dictionary becomes associated with the behavior characteristics of the electric appliances. Finally, through experimental verification, we find that our algorithm can preserve consumers' privacy without compromising utility.
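A minimal sketch of binary randomized response applied entry-wise to a 0/1 signature matrix; keeping each bit with probability e^eps / (1 + e^eps) gives eps-differential privacy per entry. The matrix contents and epsilon value are illustrative assumptions, and this is not the paper's full batch algorithm.

    # Sketch: entry-wise binary randomized response on a behaviour matrix.
    import numpy as np

    def randomized_response(matrix, eps, seed=0):
        rng = np.random.default_rng(seed)
        keep_prob = np.exp(eps) / (1.0 + np.exp(eps))
        keep = rng.random(matrix.shape) < keep_prob
        return np.where(keep, matrix, 1 - matrix)   # flip the bits we do not keep

    signatures = np.array([[1, 0, 1, 0],             # appliance on/off signatures
                           [0, 0, 1, 1]])
    print(randomized_response(signatures, eps=1.0))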
The advent of smart grids offers the opportunity to better manage electricity grids. One of the most interesting challenges in modern grids is consumer demand management. Indeed, developments in Information and Communication Technologies (ICTs) encourage the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interactions between consumers to optimize the energy demand. We formulate consumption scheduling as a constrained optimization problem and use game theory to solve it. On the one hand, the proposed approach aims to reduce the total energy cost of a building's consumers, which requires cooperation between all the consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, and the peak-to-average ratio.
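In the spirit of the game-theoretic formulation above, the following sketch iterates approximate best responses in which each consumer sees only the aggregate load of the others (minimal information exchange) and flattens it; the quadratic cost model, demand values, and update rule are assumptions, not the paper's scheme.

    # Sketch: best-response style demand scheduling against a convex cost.
    import numpy as np

    rng = np.random.default_rng(0)
    slots = 24
    demands = [3.0, 2.0, 4.0]                      # hypothetical daily kWh per consumer
    # Start from arbitrary (peaky) schedules that each sum to the consumer's demand.
    schedules = [rng.random(slots) for _ in demands]
    schedules = [s * d / s.sum() for s, d in zip(schedules, demands)]

    for _ in range(50):                            # iterated approximate best responses
        for i, demand in enumerate(demands):
            others = sum(schedules) - schedules[i]   # aggregate load of the others
            # For a quadratic cost, a good response flattens the total load:
            # put more of one's own demand where the others consume less.
            target = (others.sum() + demand) / slots
            response = np.clip(target - others, 0.0, None)
            schedules[i] = response * demand / response.sum()

    total = sum(schedules)
    print("peak-to-average ratio:", round(total.max() / total.mean(), 3))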
The rapid development of mobile networks has revolutionized the way the Internet is accessed. The exponential growth of mobile subscribers, devices, and applications frequently brings about excessive traffic in mobile networks. The demand for higher data rates, lower latency, and seamless handover further drives the need for improved mobile network design. However, traditional methods can no longer offer cost-efficient solutions for better user quality of experience with fast time-to-market. Recent work adopts SDN in LTE core networks to meet these requirements. In such software-defined LTE core networks, scalability and security become important design issues that must be considered seriously. In this paper, we propose a scalable channel security scheme for the software-defined LTE core network. It applies VxLAN for scalable tunnel establishment and MACsec for security enhancement. According to our evaluation, the proposed scheme not only enhances the security of channel communication between different network components, but also improves the flexibility and scalability of the core network with little performance penalty. Moreover, it can shed light on the design of the next-generation cellular network.
A Mobile Ad hoc Network (MANET) is a self-configuring, dynamic, non-fixed infrastructure that consists of many nodes. These nodes communicate with each other without an administrative point. However, due to its nature, a MANET is prone to many attacks, such as DoS attacks. A DoS attack is severe because it prevents legitimate users from accessing their authorised services. The Monitoring, Detection, and Rehabilitation (MrDR) method has been proposed to detect DoS attacks. The MrDR method is based on calculating different trust values, as nodes may or may not be trusted. In this paper, we evaluate the MrDR method, which detects DoS attacks in MANETs, and compare it with an existing method, the Trust Enhanced Anonymous on-demand routing Protocol (TEAP), which is also based on the trust concept. We consider two factors to compare the performance of the proposed method to the TEAP method: packet delivery ratio and network overhead. The results confirm that the MrDR method achieves better network performance than the TEAP method.
Blum-Blum-Shub (BBS) is a conceptually simple pseudorandom number generator (PRNG), but it requires a very large modulus and a squaring operation for the generation of each bit, which makes it computationally heavy and slow. On the other hand, the concept of elliptic curve (EC) point operations has been extended to PRNGs that prove to have good randomness properties and reduced latency, but exhibit dependence on the secrecy of the point P. Given these pros and cons, this paper proposes a new BBS-ECPRNG approach in which the modulus is the product of two elliptic curve points, both primes of a given length, and the number of bits extracted per iteration is given by a binary fraction. We evaluate the algorithm performance by generating 1000 distinct sequences of 10^6 bits each. The results were analyzed based on the overall performance of the sequences using the NIST standard statistical test suite. The average performance of the sequences was observed to be above the minimum confidence level of 99.7 percent, and the sequences successfully passed all the statistical tests of randomness.
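For reference, a minimal sketch of the classic BBS generator that the proposal builds on (one modular squaring per output bit), not the proposed BBS-ECPRNG variant; the toy primes and seed are illustrative only, and real use requires cryptographically large moduli.

    # Sketch: classic Blum-Blum-Shub bit generation with toy parameters.
    def bbs_bits(p, q, seed, nbits):
        assert p % 4 == 3 and q % 4 == 3        # BBS requires p, q congruent to 3 mod 4
        m = p * q
        x = (seed * seed) % m                   # make the state a quadratic residue
        out = []
        for _ in range(nbits):
            x = (x * x) % m                     # one modular squaring per output bit
            out.append(x & 1)                   # extract the least-significant bit
        return out

    print(bbs_bits(p=11, q=23, seed=3, nbits=16))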
Supervisory Control and Data Acquisition (SCADA) systems play a critical role in the operation of large-scale distributed industrial systems. There are many vulnerabilities in SCADA systems, and inadvertent events or malicious attacks from outside as well as inside could lead to catastrophic consequences. Network-based intrusion detection is a preferred approach for providing security analysis of SCADA systems due to its less intrusive nature. Data in SCADA network traffic can generally be divided into transport, operation, and content levels. Most existing solutions focus on monitoring and event detection at only one or two of these levels, which is not enough to detect and reason about attacks at all three levels. In this paper, we develop a novel edge-based multi-level anomaly detection framework for SCADA networks named EDMAND. EDMAND monitors all three levels of network traffic data and applies appropriate anomaly detection methods based on the distinct characteristics of the data. Alerts are generated, aggregated, and prioritized before being sent back to control centers. A prototype of the framework is built to evaluate its detection ability and time overhead.
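As a toy illustration of per-level anomaly checks followed by alert prioritisation, the sketch below scores each level's current count against a baseline and orders alerts by severity; the levels, thresholds, and scoring rule are assumptions, not EDMAND's actual methods.

    # Sketch: deviation-based anomaly check per traffic level, with prioritised alerts.
    import statistics

    baselines = {"transport": [120, 118, 125, 119], "operation": [10, 11, 9, 10]}
    observed = {"transport": 480, "operation": 10}            # new per-interval counts

    alerts = []
    for level, history in baselines.items():
        mean, stdev = statistics.mean(history), statistics.pstdev(history)
        score = abs(observed[level] - mean) / (stdev or 1.0)  # z-score style deviation
        if score > 3.0:
            alerts.append((score, level))

    for score, level in sorted(alerts, reverse=True):         # prioritise by severity
        print(f"ALERT [{level}] deviation score {score:.1f}")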
Edge detection is one of the most important topics in image processing. In the cloud computing scenario, performing edge detection may also require privacy protection. In this paper, we propose an edge detection and image segmentation scheme that operates on an encrypted image with a Sobel edge detector. We implement Gaussian filtering and the Sobel operator on the image in the encrypted domain using the homomorphic property. By implementing an adaptive threshold decision algorithm in the encrypted domain, we obtain a threshold determined by the image distribution. With the technique of garbled circuits, we perform comparisons in the encrypted domain and obtain the edges of the image without decrypting the image in advance. We then propose an image segmentation scheme on the encrypted image based on the detected edges. Our experiments demonstrate the viability and effectiveness of the proposed encrypted-image edge detection and segmentation.
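For orientation, the sketch below shows the plaintext analogue of the pipeline described above (Gaussian filtering, Sobel gradients, data-driven threshold); the homomorphic-encryption and garbled-circuit steps are not reproduced, and the synthetic image and threshold rule are assumptions.

    # Sketch: plaintext Gaussian + Sobel edge detection with a simple threshold.
    import numpy as np
    from scipy import ndimage

    image = np.zeros((64, 64))
    image[16:48, 16:48] = 1.0                       # synthetic bright square

    smoothed = ndimage.gaussian_filter(image, sigma=1.0)
    gx = ndimage.sobel(smoothed, axis=1)            # horizontal gradient
    gy = ndimage.sobel(smoothed, axis=0)            # vertical gradient
    magnitude = np.hypot(gx, gy)

    threshold = magnitude.mean() + magnitude.std()  # simple data-driven threshold
    edges = magnitude > threshold
    print(edges.sum(), "edge pixels")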