Bibliography
Community structure detection in social networks has become a major challenge, and various methods have been presented in the literature to address it. Recently, several approaches based on the MapReduce model have also been proposed, in which data and computation are divided among different processing nodes so that the time and memory complexity of community detection in large social networks is reduced. In this paper, a MapReduce model is first proposed to detect the structure of communities. The proposed framework is then rewritten using a mechanism called distributed cache memory, which can store different values associated with different keys and, when necessary, make them available at different compute nodes. Finally, the rewritten framework has been implemented with Spark, and its results are reported on several large social networks. The experiments show the effectiveness of the proposed framework under varying parameter values.
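A minimal sketch of the general idea, assuming a simple synchronous label-propagation step (not the paper's actual framework): the current node-label table is shipped to the workers through a Spark broadcast variable, which plays the role of the distributed cache memory described above.

    # Synchronous label propagation over an edge list with PySpark; the label
    # table is cached at the workers via a broadcast variable each iteration.
    from pyspark import SparkContext
    from collections import Counter

    sc = SparkContext(appName="lpa-sketch")
    edges = sc.parallelize([(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4), (3, 4)])
    # undirected graph: emit both directions, then build adjacency lists
    adj = edges.flatMap(lambda e: [e, (e[1], e[0])]).groupByKey().mapValues(list).cache()

    labels = {v: v for v in adj.keys().collect()}      # each node starts in its own community

    for _ in range(10):
        b = sc.broadcast(labels)                       # distributed cache of current labels
        def relabel(pair):
            node, neighbours = pair
            counts = Counter(b.value[n] for n in neighbours)
            return node, counts.most_common(1)[0][0]   # adopt the most frequent neighbour label
        labels = dict(adj.map(relabel).collect())

    print(labels)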
Big data processing systems are becoming increasingly present in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. In this work we focus on caching policies, which for big data raise important new challenges. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, a principled approach to quantifying the utility of cached objects through a family of utility functions that depend on the frequency of reads of an object. We further design the Approximate Histogram, a technique based on an array of counters, which promises runtime- and space-efficient computation of the metric required by the cache policy. We evaluate the caching policies from the Read Density family through trace-based simulation and compare them with over ten state-of-the-art alternatives, using two workload traces representative of big data processing collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
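A hedged sketch of the flavour of this approach (the actual Read Density utility functions and Approximate Histogram differ): per-object read counts are approximated with a fixed array of counters, count-min style, and the cached object with the lowest estimated read count is evicted.

    # Approximate per-object read counts with a fixed counter array and use
    # them as the eviction metric of a toy cache.
    import hashlib

    class CounterArray:
        def __init__(self, rows=4, cols=1024):
            self.rows, self.cols = rows, cols
            self.table = [[0] * cols for _ in range(rows)]

        def _idx(self, key, row):
            h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
            return int.from_bytes(h, "big") % self.cols

        def record_read(self, key):
            for r in range(self.rows):
                self.table[r][self._idx(key, r)] += 1

        def estimate(self, key):
            return min(self.table[r][self._idx(key, r)] for r in range(self.rows))

    class SketchCache:
        def __init__(self, capacity):
            self.capacity, self.store, self.stats = capacity, {}, CounterArray()

        def get(self, key, load):
            self.stats.record_read(key)
            if key in self.store:
                return self.store[key]                 # hit
            value = load(key)                          # miss: fetch and admit
            if len(self.store) >= self.capacity:       # evict lowest estimated read count
                victim = min(self.store, key=self.stats.estimate)
                del self.store[victim]
            self.store[key] = value
            return value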
Traditional power grid security schemes are being replaced by highly advanced and efficient smart security schemes due to the advancement in grid structure and the inclusion of cyber control and monitoring tools. Smart attackers mount physical, cyber, or cyber-physical attacks to gain access to the power system and manipulate or override system status, measurements, and commands. In this paper, we formulate the environment for attacker-defender interaction in the smart power grid. We provide a strategic analysis of the attacker-defender interaction using a game-theoretic approach. We formulate the problem as a repeated game, implement it on the power system, and investigate the optimal strategic behaviour in terms of the players' mixed strategies. Generation power is used to define the utility or cost function for calculating the game payoffs. An attack-defence budget is also incorporated into the attacker-defender repeated game to reflect a more realistic scenario. The proposed game model is validated on the IEEE 39-bus benchmark system. A comparison between the proposed game model and the all-monitoring model is provided to validate the observations.
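An illustrative sketch only (the payoffs in lost generation power are hypothetical, and the paper's repeated game with attack-defence budgets is richer): mixed strategies of a 2x2 zero-sum attacker-defender stage game obtained by fictitious play.

    # Fictitious play for a toy attacker-defender zero-sum game; payoffs are
    # hypothetical lost generation power (MW), not values from the paper.
    import numpy as np

    # rows: attacker targets line A or line B; cols: defender monitors A or B
    payoff = np.array([[0.0, 80.0],    # attack A: 0 if A is monitored, else 80 MW lost
                       [60.0, 0.0]])   # attack B: 60 MW lost if unmonitored

    att_counts = np.ones(2)            # how often each pure strategy was a best response
    def_counts = np.ones(2)
    for _ in range(20000):
        att_br = np.argmax(payoff @ (def_counts / def_counts.sum()))
        def_br = np.argmin((att_counts / att_counts.sum()) @ payoff)
        att_counts[att_br] += 1
        def_counts[def_br] += 1

    print("attacker mix:", att_counts / att_counts.sum())   # approx. (3/7, 4/7)
    print("defender mix:", def_counts / def_counts.sum())   # approx. (4/7, 3/7)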
In this paper, we develop a statistical framework for image steganography in which the cover and stego messages are modeled as multivariate Gaussian random variables. By minimizing the detection error of an optimal detector within the adopted statistical model, we propose a novel Gaussian embedding method. Furthermore, we extend the formulation to cost-based steganography, resulting in a universal embedding scheme that works with embedding costs as well as variance estimators. Experimental results show that the proposed approach avoids embedding in smooth regions and significantly improves the security of state-of-the-art methods such as HILL, MiPOD, and S-UNIWARD.
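A hedged sketch of the independent-pixel special case (the MiPOD-style formulation that a multivariate model generalizes): for plus/minus-one embedding with change rates \beta_n in pixels whose residuals are modeled as zero-mean Gaussians with variances \sigma_n^2, the power of the optimal detector is governed by the deflection

    d^2 \approx \sum_n \frac{2\,\beta_n^2}{\sigma_n^4},

and the embedding minimizes d^2 subject to the payload constraint \sum_n h(\beta_n) = m, where h(\beta) = -2\beta\ln\beta - (1-2\beta)\ln(1-2\beta) is the ternary entropy. Pixels with small \sigma_n (smooth regions) thus receive vanishing change rates, matching the behaviour reported above.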
The Internet has gradually penetrated the national economy, politics, culture, military, education, and other fields. Because of its openness, interconnectivity, and other characteristics, the Internet is vulnerable to all kinds of malicious attacks. This research uses a honeynet to collect attacker information and proposes a network penetration recognition technique based on interactive behaviour analysis. Sebek is used to capture the attacker's keystroke records, and time-series modelling of the keystroke sequences of the interaction behaviour with a Recurrent Neural Network is proposed. The attack recognition method is built on Long Short-Term Memory, which addresses the vanishing gradient, exploding gradient, and limited long-term memory problems of ordinary Recurrent Neural Networks. Finally, experiments verify that the Long Short-Term Memory network achieves a high accuracy rate in recognizing penetration attacks.
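A minimal sketch of the classification stage, with an assumed architecture and hypothetical dimensions (the paper's network and input encoding may differ): an LSTM over keystroke code sequences producing benign/attack logits.

    # Toy LSTM classifier for keystroke sequences (PyTorch).
    import torch
    import torch.nn as nn

    class KeystrokeLSTM(nn.Module):
        def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)   # keystroke codes -> vectors
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, num_classes)

        def forward(self, x):                                  # x: (batch, seq_len) int64
            _, (h_n, _) = self.lstm(self.embed(x))
            return self.fc(h_n[-1])                            # logits from the last hidden state

    model = KeystrokeLSTM()
    dummy = torch.randint(0, 128, (8, 50))                     # 8 sequences of 50 keystrokes
    logits = model(dummy)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
    loss.backward()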
In today's interconnected world, universities recognize the importance of protecting their information assets from internal and external threats. As potential insider threats to Information Security, employees are often described as the weakest link, and both employees and organizations should be aware of this growing challenge. Understanding staff perception of compliance behaviour is critical for universities wanting to leverage their staff capabilities to mitigate Information Security risks. Therefore, this research seeks to gain insight into staff perception based on factors adopted from several theories, using the proposed constructs of "perceived" practices/policies and "perceived" intention to comply. Drawing from the General Deterrence Theory, Protection Motivation Theory, Theory of Planned Behaviour, and Information Reinforcement, within the context of Palestinian universities, this paper integrates staff awareness of Information Security Policy (ISP) countermeasures as antecedents to "perceived" influencing factors (perceived sanctions, perceived rewards, perceived coping appraisal, and perceived information reinforcement). The empirical study follows a quantitative research approach, using a survey as the data collection method and questionnaires as the research instruments. Partial least squares structural equation modelling is used to inspect the reliability and validity of the measurement model and to test the hypotheses of the structural model. The research covers ISP awareness among staff and seeks to assert that information security is the responsibility of all academic and administrative staff across departments. Overall, our pilot study findings seem promising, and we found strong support for our theoretical model.
The evaluation of fault attacks on security-critical hardware implementations of cryptographic primitives is an important concern. To this end, we have created a framework for the automated construction of fault attacks on hardware realizations of ciphers. The framework can be used to quickly evaluate any cipher implementation, including any optimizations. It takes the circuit description of the cipher and the fault model as input. The output of the framework is a set of algebraic equations, such as conjunctive normal form (CNF) clauses, which is then fed to a SAT solver. We consider both attacking an actual implementation of a cipher on a field-programmable gate array (FPGA) platform using a fault injector and evaluating an early design of the cipher using idealized fault models. We report the successful application of our hardware-oriented framework to a collection of ciphers, including the Advanced Encryption Standard (AES) and the lightweight block ciphers LED and PRESENT. The corresponding results and a discussion of the impact of different fault models on our framework are given. Moreover, we report significant improvements over similar frameworks, such as speedups and more advanced features. Our framework is the first algebraic fault attack (AFA) tool to evaluate the state-of-the-art ciphers LED-64, PRESENT, and full-scale AES using only hardware-oriented structural cipher descriptions.
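A toy sketch of the overall flow rather than the framework itself: a fault relation over key bits is encoded as CNF clauses and handed to a SAT solver (a brute-force stand-in here; a real AFA tool would emit DIMACS for an off-the-shelf solver). The variables, observations, and fault relation below are invented for illustration.

    # Encode a tiny fault relation as CNF and enumerate satisfying key candidates.
    from itertools import product

    def xor3(a, b, c):
        # CNF clauses enforcing a XOR b XOR c = 0 (variables are 1-based ints)
        return [[a, b, -c], [a, -b, c], [-a, b, c], [-a, -b, -c]]

    # Toy example with key bits k1..k3 (vars 1..3) and an injected fault d (var 4):
    # observations give  k1 XOR k2 = d,  k2 XOR k3 = 1,  d = 1.
    cnf = xor3(1, 2, 4) + [[2, 3], [-2, -3]] + [[4]]

    def solve(cnf, n_vars):
        for bits in product([False, True], repeat=n_vars):
            assign = {i + 1: bits[i] for i in range(n_vars)}
            if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
                yield assign

    for model in solve(cnf, 4):                 # each model is a remaining key candidate
        print({f"k{v}": int(b) for v, b in model.items() if v <= 3})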
Methods for implementing the integer arithmetic operations of addition, subtraction, and multiplication in the system of residual classes are considered. It is shown that their practical use in computer systems can significantly improve the performance of arithmetic operations. A new method is developed for raising numbers represented in the system of residual classes to an arbitrary natural power, in both the positive and negative number ranges. An example of applying the proposed method to numbers represented in the system of residual classes for the power k = 2 is given.
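A brief sketch of the underlying arithmetic, with illustrative moduli (the paper's exponentiation method itself is not reproduced here): residues are processed independently per modulus and the result is recovered with the Chinese Remainder Theorem, shown for squaring (k = 2).

    # Residue-class arithmetic: component-wise operations plus CRT reconstruction.
    from math import prod

    MODULI = (7, 11, 13, 15)                 # pairwise coprime, dynamic range 15015

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def rns_mul(a, b):
        return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

    def from_rns(r):
        M = prod(MODULI)
        x = 0
        for ri, m in zip(r, MODULI):
            Mi = M // m
            x += ri * Mi * pow(Mi, -1, m)    # pow(..., -1, m): modular inverse (coprime moduli)
        return x % M

    a = to_rns(97)
    print(from_rns(rns_mul(a, a)))           # 97**2 = 9409, within the dynamic range 15015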
As opposed to a traditional power grid, a smart grid can help utilities save energy and therefore reduce the cost of operation; it also increases the reliability of the system. In smart grids, the quality of monitoring and control can be substantially improved by incorporating computing and intelligent communication capabilities. However, this exposes the system to false data injection (FDI) attacks and makes it vulnerable to intrusions. It is therefore important to detect such false data injection attacks and to provide an algorithm for protecting the system against them. In this paper, a comparison between three FDI detection methods is made. An H2 control method is then proposed to detect and control false data injection on a 12th-order model of a smart grid. Disturbances and uncertainties were added to the system, and the results show the system to be fully controllable. This paper shows the implementation of a feedback controller to detect and mitigate false data injection attacks; the controller can be incorporated in real-life smart grid operations.
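An illustrative residual-based detector on a hypothetical second-order model (not the paper's 12th-order H2 design): a Luenberger observer tracks the plant, and a measurement bias injected during part of the run drives the residual norm above a threshold.

    # Observer-based FDI detection on a toy discrete-time state-space model.
    import numpy as np

    A = np.array([[0.95, 0.03], [0.02, 0.90]])   # state matrix (hypothetical)
    B = np.array([[0.1], [0.05]])
    C = np.eye(2)                                # both states measured
    L = 0.2 * np.eye(2)                          # observer gain; A - L C is stable

    rng = np.random.default_rng(0)
    x, x_hat = np.zeros(2), np.zeros(2)
    threshold = 0.2
    for k in range(100):
        u = np.array([1.0])
        x = A @ x + B @ u + 0.01 * rng.standard_normal(2)     # process noise
        z = C @ x + 0.01 * rng.standard_normal(2)             # measurement noise
        if 60 <= k < 80:
            z = z + np.array([0.8, -0.5])                     # injected false data
        r = z - C @ x_hat                                     # residual / innovation
        if np.linalg.norm(r) > threshold:
            print(f"k={k}: possible FDI attack, |r|={np.linalg.norm(r):.2f}")
        x_hat = A @ x_hat + B @ u + L @ r                     # observer update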
We conduct formal verification of the divide-and-conquer key distribution scheme (DC DHKE), a contributory group key agreement that uses a quasilinear number of exponentiations with respect to the number of communicating parties. The verification is conducted using both ProVerif and TLA+. ProVerif is used to verify the protocol's correctness as well as its security against a passive attacker, while TLA+ is used to verify whether all participants in the protocol obtain the mutual key simultaneously. We also verify the ING and GDH.3 protocols for comparison. The verification results show that the ING, GDH.3, and DC DHKE protocols satisfy the stated correctness and security properties; however, the GDH.3 protocol does not satisfy the liveness property stating that all participants obtain the mutual key at the same time.
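A sketch of the ring-based ING protocol used for comparison, with toy, insecure parameters (DC DHKE arranges its exponentiations differently to reach a quasilinear total): over n-1 rounds, each party raises the value received from its neighbour to its own secret exponent, after which every party holds g^(x_1 x_2 ... x_n) mod p.

    # Ring-based contributory group key agreement (ING-style), toy parameters only.
    import random

    p = 0xFFFFFFFFFFFFFFC5          # 2**64 - 59, prime; far too small for real use
    g = 5
    n = 4
    secrets = [random.randrange(2, p - 1) for _ in range(n)]

    # values[i] is the intermediate value currently held by party i
    values = [pow(g, x, p) for x in secrets]          # round 0: each party computes g^{x_i}
    for _ in range(n - 1):
        received = values[-1:] + values[:-1]          # each party passes to its right neighbour
        values = [pow(received[i], secrets[i], p) for i in range(n)]

    assert len(set(values)) == 1                      # everyone derived the same group key
    print(hex(values[0]))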
Today's extensive use of the Internet creates huge volumes of data on both the client and server sides. Normally, users do not want to store all of this data locally or keep archives of it on the server, and some unwanted data, such as trash, cache, and private data, needs to be deleted periodically. Explicit deletion can be applied to local data, although it is a troublesome job; but users have no transparency into the personal data stored on the server, since there is no way to know whether it has been cached, copied, or archived by third parties, or sold by the service provider. Our research seeks to provide an automatic data sanitization system that makes data self-destructing. Specifically, we give data a life cycle: it is erased automatically at the end of its life, and the destroyed data cannot be recovered by any effort. In this paper, we present FlashGhost, a system that meets this challenge through a novel integration of cryptographic techniques with a frequent-colliding hash table. In this system, data becomes unreadable and is rendered unrecoverable by being overwritten multiple times after its validity period has expired. In addition, system reliability is enhanced by threshold cryptography. We also present a mathematical model and verify it through a number of experiments, which demonstrate theoretically and experimentally that our system is practical to use and meets the data auto-sanitization goal described above.
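A toy sketch of the self-destructing-data idea, not FlashGhost itself: data is encrypted with an ephemeral key, the key is split into shares stored with a time-to-live, and a read after expiry overwrites the shares so only unrecoverable ciphertext remains. XOR stands in here both for a real cipher and for n-of-n (rather than threshold) secret sharing.

    # Minimal self-destructing object with TTL-bound key shares.
    import os, time

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    class SelfDestructingObject:
        def __init__(self, data: bytes, ttl_seconds: float, n_shares: int = 3):
            key = os.urandom(len(data))
            self.ciphertext = xor_bytes(data, key)
            # split the key into n XOR shares; all of them are needed to rebuild it
            self.shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
            last = key
            for s in self.shares:
                last = xor_bytes(last, s)
            self.shares.append(last)
            self.expiry = time.time() + ttl_seconds

        def read(self) -> bytes:
            if time.time() >= self.expiry:
                # destroy: overwrite the shares so the key can never be rebuilt
                self.shares = [os.urandom(len(self.ciphertext)) for _ in self.shares]
                raise ValueError("data expired and destroyed")
            key = self.shares[0]
            for s in self.shares[1:]:
                key = xor_bytes(key, s)
            return xor_bytes(self.ciphertext, key)

    obj = SelfDestructingObject(b"secret report", ttl_seconds=1.0)
    print(obj.read())              # readable before expiry
    time.sleep(1.1)
    try:
        obj.read()
    except ValueError as e:
        print(e)                   # the key shares have been overwritten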
With the rapid proliferation of mobile users, spectrum scarcity has become one of the issues that must be addressed. Cognitive Radio technology addresses this problem by allowing opportunistic use of the spectrum bands. In cognitive radio networks, unlicensed users can use licensed channels without causing harmful interference to licensed users. However, cognitive radio networks can be subject to different security threats that can cause severe performance degradation. One of the main attacks on these networks is primary user emulation, in which a malicious node emulates the characteristics of the primary user's signals. In this paper, we propose a technique for detecting this attack based on RSS-based localization with maximum likelihood estimation. The simulation results show that the proposed technique outperforms the RSS-based localization method in detecting the primary user emulation attacker.
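A hedged sketch of the detection idea with simplified, hypothetical parameters: several sensors measure the RSS of a suspicious signal, the transmitter position is estimated by maximum likelihood under a log-distance path-loss model with Gaussian shadowing, and the emitter is flagged as a primary user emulation (PUE) attacker if it lies far from the known primary transmitter.

    # Grid-search ML localization from RSS measurements, then a distance test.
    import numpy as np

    P0, N_EXP, SIGMA = -30.0, 3.0, 2.0              # dBm at 1 m, path-loss exponent, shadowing std

    def rss(tx, sensors, rng=None):
        d = np.linalg.norm(sensors - tx, axis=1)
        mean = P0 - 10 * N_EXP * np.log10(d)
        noise = rng.normal(0, SIGMA, len(sensors)) if rng is not None else 0.0
        return mean + noise

    def ml_estimate(measured, sensors, grid_step=5.0, size=1000.0):
        xs = np.arange(0, size, grid_step)
        best, best_ll = None, -np.inf
        for x in xs:
            for y in xs:
                # Gaussian shadowing => ML is least squares over candidate positions
                ll = -np.sum((measured - rss(np.array([x, y]), sensors)) ** 2)
                if ll > best_ll:
                    best, best_ll = np.array([x, y]), ll
        return best

    rng = np.random.default_rng(1)
    sensors = rng.uniform(0, 1000, (6, 2))           # six sensing nodes
    primary = np.array([100.0, 900.0])               # known primary transmitter location
    attacker = np.array([700.0, 200.0])              # PUE attacker elsewhere

    est = ml_estimate(rss(attacker, sensors, rng), sensors)
    print("estimated position:", est)
    print("PUE attack!" if np.linalg.norm(est - primary) > 50.0 else "primary user")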
The number of sensors and embedded devices in an urban area can be on the order of thousands. New low-power wide-area (LPWA) wireless network technologies have been proposed to support this large number of asynchronous, low-bandwidth devices. Among them, Cooperative UltraNarrowband (C-UNB) is a clean-slate cellular network technology for connecting these devices to a remote site or data collection server. C-UNB employs small-bandwidth channels and a lightweight random access protocol. In this paper, a new application is investigated: the use of C-UNB wireless networks to support the Advanced Metering Infrastructure (AMI), in order to facilitate communication between smart meters and utilities. To this end, we adapted a mathematical model for C-UNB and implemented a network simulation module in NS-3 to represent C-UNB's physical and medium access control layers. For the application layer, we implemented the DLMS-COSEM protocol (Device Language Message Specification - Companion Specification for Energy Metering). Details of the simulation module are presented, and we conclude that it supports the results of the mathematical model.
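A much simpler stand-in for the adapted mathematical model (hypothetical numbers, a pure-ALOHA approximation, ignoring C-UNB's cooperative reception and message repetitions): the probability that a smart meter's uplink message escapes collision on a shared random-access channel.

    # Pure-ALOHA success probability for an AMI-like traffic mix.
    import math

    N, msgs_per_hour, tau_s = 5000, 1, 0.1        # meters, messages/hour, message airtime (s)
    rate = N * msgs_per_hour / 3600.0             # aggregate arrival rate (messages/s)
    G = rate * tau_s                              # offered load in message durations
    p_success = math.exp(-2 * G)                  # no overlap within one vulnerability window
    print(f"offered load G = {G:.3f}, per-message success probability = {p_success:.3f}")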
Peer-to-peer (P2P) computing refers to the well-known technology that lets peers collaborate spontaneously as equals in the network, using appropriate information and communication systems without the need for central server coordination. Today, the interconnection of several P2P networks has become a genuine solution for increasing system reliability, fault tolerance, and resource availability. However, the existence of security threats in such networks leads us to investigate the safety of users against P2P threats by studying the effects of competition between these interconnected networks. In this paper, we present an e-epidemic model to characterize worm propagation in an interconnected peer-to-peer network. We address this issue by introducing a model of network competition in which an unprotected network is willing to partially weaken its own safety in order to more severely damage a more protected network. The unprotected network can infect all peers in the competing network if they do not react against the passive worm propagation. Our model also evaluates the effect of immunization strategies adopted by the protected network to resist attacking networks. The launch time of immunization strategies in the protected network, the number of synapse peers connected to both networks, and other relevant parameters are also investigated in this paper.
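An illustrative SIR-style sketch (the paper's e-epidemic model is more detailed, and the rates, coupling, and immunization term here are hypothetical): worm propagation in two interconnected P2P networks coupled through synapse peers, where the protected network starts immunizing at time t_imm.

    # Euler integration of a two-network SIR model with cross-network infection.
    import numpy as np

    beta1, beta2, cross = 0.30, 0.18, 0.05   # infection rates within / across networks
    gamma, imm_rate, t_imm = 0.05, 0.10, 40  # recovery rate, immunization rate, launch time

    def step(state, t, dt):
        S1, I1, R1, S2, I2, R2 = state
        new1 = (beta1 * I1 + cross * I2) * S1                 # unprotected network
        new2 = (beta2 * I2 + cross * I1) * S2                 # protected network
        vacc = imm_rate * S2 if t >= t_imm else 0.0           # immunization of susceptibles
        dS1, dI1, dR1 = -new1, new1 - gamma * I1, gamma * I1
        dS2, dI2, dR2 = -new2 - vacc, new2 - gamma * I2, gamma * I2 + vacc
        return state + dt * np.array([dS1, dI1, dR1, dS2, dI2, dR2])

    state, dt = np.array([0.99, 0.01, 0.0, 1.0, 0.0, 0.0]), 0.1
    for k in range(int(200 / dt)):
        state = step(state, k * dt, dt)
    print("final infected fractions:", state[1], state[4])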
Fourier domain mode locked (FDML) lasers, in which the sweep period of the swept bandpass filter is synchronized with the roundtrip time of the optical field, are broadband and rapidly tunable fiber ring laser systems that offer rich dynamics. A detailed understanding is important from a fundamental point of view, and is also required in order to improve current FDML lasers, which have not yet reached their coherence limit. Here, we study the formation of localized patterns in the intensity trace of FDML laser systems based on a master equation approach [1] derived from the nonlinear Schrödinger equation for polarization-maintaining setups, which shows excellent agreement with experimental data. A variety of localized patterns and chaotic or bistable operation modes were previously discovered in [2–4] by investigating primarily quasi-static regimes within a narrow sweep bandwidth, using a delay differential equation model. In particular, the formation of so-called holes, which are characterized by a dip in the intensity trace and a rapid phase jump, is described. Such holes have tentatively been associated with Nozaki-Bekki holes, which are solutions to the complex Ginzburg-Landau equation. In Fig. 1 (b) to (d), small sections of a numerical solution of our master equation are presented for a partially dispersion-compensated polarization-maintaining FDML laser setup. Within our approach, we are able to study the full sweep dynamics over a broad sweep range of more than 100 nm. This allows us to identify different co-existing intensity patterns within a single sweep. In general, high-frequency distortions in the intensity trace of FDML lasers [5] are mainly caused by synchronization mismatches due to fiber dispersion or a detuning of the roundtrip time of the optical field from the sweep period of the swept bandpass filter. These timing errors lead to rich and complex dynamics over many roundtrips and are a major source of noise, greatly affecting imaging and sensing applications; for example, the imaging quality in optical coherence tomography, where FDML lasers are superior sources, is significantly reduced [5].
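For reference, a hedged sketch of the underlying propagation model (sign conventions vary, and the master equation of [1] additionally includes the swept bandpass filter, gain, and loss terms): the slowly varying field envelope A(z,T) obeys a nonlinear Schrödinger equation of the standard fiber-optic form

    \partial_z A = -\frac{i\beta_2}{2}\,\partial_T^2 A + i\gamma\,|A|^2 A,

where \beta_2 is the group-velocity dispersion and \gamma the Kerr nonlinearity coefficient of the fiber.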