Bibliography
This paper proposes a novel architecture for the module partitioning problem that arises in dynamic and partially reconfigurable computing in VLSI design automation. The partitioning problem is modeled as a hypergraph and, because it is NP-complete, is treated with a probabilistic algorithm based on Markov chains and their transition probability matrices. The proposed technique has a two-level implementation methodology: the first level combines parallel processing of design elements with efficient pipelining, while the second level is based on a genetic-algorithm optimization architecture. The methodology employs hardware/software co-design and co-verification techniques. The architecture was verified by implementation within the MOLEN reconfigurable processor and tested on a Xilinx Virtex-5 based development board. The proposed multi-objective module partitioning design was experimentally evaluated on the ISPD’98 circuit partitioning benchmark suite, and its efficiency and throughput were compared with those of the hMETIS recursive bisection partitioning approach. The results indicate that the proposed method improves throughput and efficiency by up to 39 times with only a small increase in design space. The proposed architectural style is sketched and concisely discussed in this manuscript, and the results are compared and analyzed against existing work.
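No implementation details are given in the abstract; as a rough illustration of the genetic-algorithm optimization level it mentions, the hedged Python sketch below evolves a two-way partition of a tiny hypergraph to minimize the number of cut hyperedges. The hypergraph, population size, and mutation rate are invented for this example and are not taken from the paper.

    import random

    # Toy hypergraph: each hyperedge is a set of node indices (invented example data).
    NUM_NODES = 8
    HYPEREDGES = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}]

    def cut_size(assignment):
        """Number of hyperedges whose nodes span both partitions."""
        return sum(1 for e in HYPEREDGES if len({assignment[v] for v in e}) > 1)

    def random_individual():
        return [random.randint(0, 1) for _ in range(NUM_NODES)]

    def mutate(ind, rate=0.1):
        return [1 - b if random.random() < rate else b for b in ind]

    def crossover(a, b):
        point = random.randrange(1, NUM_NODES)
        return a[:point] + b[point:]

    def evolve(pop_size=20, generations=50):
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=cut_size)          # lower cut is fitter
            parents = population[:pop_size // 2]   # simple truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return min(population, key=cut_size)

    if __name__ == "__main__":
        best = evolve()
        print("partition:", best, "cut:", cut_size(best))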
Sensors are indispensable components of modern plants and processes, and their reliability is vital to the safe and dependable operation of complex systems. In this paper, the problem of designing and developing a data-driven Multiple Sensor Fault Detection and Isolation (MSFDI) algorithm for nonlinear processes is investigated. The proposed scheme is based on an evolving multi-Takagi-Sugeno framework in which each sensor output is estimated using a model derived from the available input/output measurement data. Our proposed MSFDI algorithm is applied to a Continuous-Flow Stirred-Tank Reactor (CFSTR). Simulation results demonstrate and validate the performance capabilities of our proposed MSFDI algorithm.
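As a simplified stand-in for the data-driven sensor-output estimation described above (the paper uses an evolving multi-Takagi-Sugeno framework, which is not reproduced here), the following Python sketch fits one linear model per sensor from fault-free data and flags any sensor whose residual exceeds a threshold; all signals, gains, and thresholds are invented.

    import numpy as np

    def fit_sensor_models(u, Y):
        """Fit, for each sensor, a simple linear model y_j ~ a_j*u + b_j from
        fault-free training data. u: (samples,), Y: (samples, sensors)."""
        X = np.column_stack([u, np.ones_like(u)])
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)   # shape (2, sensors)
        return coeffs

    def detect_faults(u_now, y_now, coeffs, threshold=0.5):
        """Flag each sensor whose residual |measured - estimated| exceeds the threshold."""
        estimates = np.array([u_now, 1.0]) @ coeffs
        residuals = np.abs(y_now - estimates)
        return residuals > threshold

    # Toy example: 4 sensors observing one process variable, sensor 2 develops a bias fault.
    rng = np.random.default_rng(0)
    u = rng.uniform(0.0, 10.0, size=300)                     # process input
    gains = np.array([1.0, 0.8, 1.2, 0.5])
    Y = np.outer(u, gains) + 0.05 * rng.normal(size=(300, 4))
    coeffs = fit_sensor_models(u, Y)

    y_now = gains * 5.0
    y_now[2] += 3.0                                           # injected bias fault
    print(detect_faults(5.0, y_now, coeffs))                  # expect only sensor 2 flagged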
Modeling and analyzing the security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach to tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of attack adaptiveness, which substantially weakens the often-made assumptions of independence and exponentially distributed attack inter-arrival times. The approach leads to a stochastic process model with two security metrics, and we obtain analytic results for these metrics.
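A minimal Monte Carlo sketch of the kind of shock-model process described above is given below; the Weibull inter-arrival times, the adaptation rule, and the success-probability curve are invented assumptions used only to illustrate how a metric such as the probability of compromise by time T could be estimated, and they do not reflect the paper's actual model.

    import random

    def simulate_compromise_time(defense_level=5, adapt_factor=0.9, horizon=100.0):
        """One run: attacks arrive as shocks with non-exponential (Weibull) gaps;
        each failed attack shortens the next gap (a crude stand-in for attacker
        adaptiveness), and success probability grows with the number of attempts."""
        t, attempts, mean_gap = 0.0, 0, 10.0
        while t < horizon:
            t += random.weibullvariate(mean_gap, 1.5)   # invented inter-arrival law
            attempts += 1
            p_success = min(1.0, attempts / (attempts + defense_level))
            if random.random() < p_success:
                return t                                 # time of compromise
            mean_gap *= adapt_factor                     # attacker adapts, attacks faster
        return None                                      # survived the horizon

    def prob_compromised_by(T, runs=10_000):
        """Estimate the security metric P(compromise time <= T) by Monte Carlo."""
        hits = sum(1 for _ in range(runs)
                   if (ct := simulate_compromise_time(horizon=T)) is not None and ct <= T)
        return hits / runs

    print(prob_compromised_by(50.0))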
In this paper, we define a new homomorphic signature for identity management in mobile cloud computing. A mobile user first computes a full signature on all of his sensitive personal information (SPI) and stores it with a trusted third party (TTP). During the validity period of his full signature, whenever the user wants to call a cloud service, he authenticates himself to the cloud service provider (CSP) through the TTP. In our scheme, the mobile user only needs to send a vector to the access-controlling server (the TTP). The access-controlling server, which does not know the secret key, can compute a partial signature on a small part of the user's SPI and then sends it to the CSP. We give a formal security definition for this homomorphic signature and construct a scheme from the GHR signature. We prove that our scheme is secure assuming the security of the GHR signature.
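The GHR-based construction itself is not reproduced here; as a toy, deliberately insecure illustration of the homomorphic-signature idea (a party without the secret key combining signatures into a valid signature on derived data), the snippet below uses the multiplicative homomorphism of textbook RSA with tiny, invented parameters.

    # Toy illustration only: textbook RSA is multiplicatively homomorphic, which
    # mimics the flavor of a homomorphic signature. This is NOT the paper's
    # GHR-based scheme and is insecure; parameters are tiny and invented.
    p, q, e = 1009, 1013, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))           # private exponent

    def sign(m):                                 # textbook RSA "signature" on an integer
        return pow(m, d, n)

    def verify(m, sig):
        return pow(sig, e, n) == m % n

    # Full signature on two pieces of sensitive personal information (toy values).
    spi_1, spi_2 = 12345, 67890
    sig_1, sig_2 = sign(spi_1), sign(spi_2)

    # Homomorphic property: a party holding only the signatures can derive a valid
    # signature on the product of the messages without knowing the private key d.
    combined_sig = (sig_1 * sig_2) % n
    print(verify((spi_1 * spi_2) % n, combined_sig))   # True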
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications based on an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD performs efficient lossy compression with high fidelity; in the second stage, SS provides lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated on 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root-mean-square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, extending compression performance into a previously unexploited region. As such, this paper provides an attractive ECG compression candidate for pervasive e-health applications.
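The AFD and SS stages are not sketched here, but the two evaluation metrics quoted above are standard and easy to state; the snippet below computes the compression ratio and a commonly used form of the percentage root-mean-square difference (the abstract does not specify the exact PRD normalization used), with a synthetic signal standing in for real ECG data.

    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """CR = size of the original bitstream / size of the compressed bitstream."""
        return original_bits / compressed_bits

    def prd(original, reconstructed):
        """Percentage root-mean-square difference between the original ECG signal
        and its reconstruction (common non-baseline-removed form)."""
        original = np.asarray(original, dtype=float)
        reconstructed = np.asarray(reconstructed, dtype=float)
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                               / np.sum(original ** 2))

    # Toy usage with a synthetic "ECG-like" signal and a slightly distorted copy.
    t = np.linspace(0, 1, 360)
    ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
    recon = ecg + 0.01 * np.random.default_rng(0).normal(size=ecg.size)
    print(f"CR  = {compression_ratio(360 * 11, 200):.1f}")   # e.g., 11-bit samples
    print(f"PRD = {prd(ecg, recon):.2f} %")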
Keeping an attacker oblivious to an attack mitigation effort is an important component of defending against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, because it helps dissuade attackers from changing their attack patterns. Conceptually, DDoS mitigation can be achieved with two components. The first is a decoy server that provides a service function or receives attack traffic as a substitute for a legitimate server. The second is a decoy network that restricts attack traffic to the peripheries of a network, or that reroutes attack traffic to decoy servers. In this paper, we propose the use of a two-stage map-table extension to the Locator/ID Separation Protocol (LISP) to realize a decoy network. We also describe and demonstrate how LISP can be used to implement an oblivious DDoS mitigation mechanism by adding a simple extension to the LISP MapServer. Together with decoy servers, this method can terminate DDoS traffic at the ingress end of a LISP-enabled network. We verified the effectiveness of the proposed mechanism through simulated DDoS attacks on a simple network topology. Our evaluation results indicate that the mechanism can be activated within a few seconds and that the attack traffic can be terminated without incurring overhead on the MapServer.
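As a hypothetical, dictionary-based mock of the two-stage mapping idea (not a real LISP or MapServer implementation; all names and addresses are invented), the sketch below keeps a normal EID-to-RLOC map plus an override table that redirects flagged source prefixes to a decoy locator.

    # Mock of a two-stage mapping lookup: a normal EID-to-RLOC map plus an
    # override table that reroutes traffic from flagged source prefixes to a
    # decoy server. Purely illustrative; not a real LISP implementation.

    EID_TO_RLOC = {"198.51.100.0/24": "203.0.113.10"}        # legitimate mapping
    DECOY_RLOC = "203.0.113.99"                               # decoy server locator
    FLAGGED_SOURCES = set()                                   # stage-1 override table

    def flag_attacker(source_prefix):
        """Called by the mitigation logic when a source is judged to be attacking."""
        FLAGGED_SOURCES.add(source_prefix)

    def map_request(source_prefix, destination_eid_prefix):
        """Two-stage lookup: check the override table first, then the normal map."""
        if source_prefix in FLAGGED_SOURCES:
            return DECOY_RLOC                                 # reroute to decoy
        return EID_TO_RLOC.get(destination_eid_prefix)

    print(map_request("192.0.2.0/24", "198.51.100.0/24"))     # normal RLOC
    flag_attacker("192.0.2.0/24")
    print(map_request("192.0.2.0/24", "198.51.100.0/24"))     # decoy RLOC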
The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, the system identities (nyms) used by principals, and the actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue of the well-known concept of quasilinear utility applicable to our setting is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model. Using this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system, and it allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
We address a discrete-time LQG control problem over a fixed performance window and apply a receding-horizon type control strategy, resulting in an exact solution to the problem in terms of semidefinite programming. The systems considered take parameters from a finite set, and switch between them according to an automaton. The controller has a finite preview of future parameters, beyond which only the set of parameters is known. We provide necessary and sufficient convex conditions for the existence of a controller which guarantees both exponential stability and finite-horizon performance levels for the system; the performance levels may differ according to the particular parameter sequence within the performance window. A simple, physics-based example is provided to illustrate the main results.
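The paper's switched-system, preview-based semidefinite-programming formulation is not reproduced here; as a hypothetical baseline that illustrates the finite-window, receding-horizon idea on a nominal (non-switched) system, the sketch below runs the standard backward Riccati recursion for finite-horizon LQR with invented double-integrator parameters.

    import numpy as np

    def finite_horizon_lqr(A, B, Q, R, N):
        """Backward Riccati recursion for the nominal finite-horizon LQR problem.
        Returns the time-varying feedback gains K_0, ..., K_{N-1}."""
        P = Q.copy()
        gains = []
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return list(reversed(gains))

    # Toy double-integrator example (invented parameters).
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q = np.eye(2)
    R = np.array([[0.1]])
    gains = finite_horizon_lqr(A, B, Q, R, N=20)

    # Receding-horizon use: apply the first gain, then re-solve at the next step.
    x = np.array([[1.0], [0.0]])
    u = -gains[0] @ x
    print(u)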
Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase—where it can help determine system parameters based on a specification—and in the evaluation phase—where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol’s implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system.
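A minimal sketch of the pmf-based state representation is shown below: the global state is a probability mass function over per-node states, and its trajectory is obtained by repeatedly applying a node-level transition matrix; the three states and all transition probabilities are invented for illustration rather than estimated from a real WSN as in the paper.

    import numpy as np

    # Hypothetical 3-state per-node Markov model (e.g., IDLE, MEASURING, REPORTING);
    # the transition probabilities are invented for illustration.
    P = np.array([[0.90, 0.10, 0.00],
                  [0.00, 0.80, 0.20],
                  [0.30, 0.00, 0.70]])

    def pmf_trajectory(pmf0, steps):
        """Evolve the fraction-of-nodes pmf: pmf_{t+1} = pmf_t @ P."""
        pmf = np.asarray(pmf0, dtype=float)
        traj = [pmf]
        for _ in range(steps):
            pmf = pmf @ P
            traj.append(pmf)
        return np.array(traj)

    def always_below(traj, state, bound):
        """Check a simple performance property along the trajectory, e.g.
        'the fraction of nodes in `state` never exceeds `bound`'."""
        return bool(np.all(traj[:, state] <= bound))

    traj = pmf_trajectory([1.0, 0.0, 0.0], steps=50)
    print(traj[-1])                              # long-run fractions of nodes per state
    print(always_below(traj, state=2, bound=0.3))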
This paper presents a model for generating personalized passwords (i.e., passwords based on the user and service profile). A user's password is generated from a list of personalized words, each drawn from a topic related to the user and the service in use. The proposed model can be applied to: (i) assess the strength of a password (i.e., determine how many guesses are needed to crack it), and (ii) generate passwords that are secure (i.e., contain digits, special characters, or capitalized characters) yet easy to memorize.
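A small, hypothetical sketch of such a generator is given below; the topic names, word lists, and character rules are invented, and the guess count is a crude upper bound on the search space of an attacker who knows the generation model, standing in for the paper's strength assessment.

    import random
    import string

    # Hypothetical topic word lists drawn from a user/service profile; the words
    # and topics here are invented for illustration.
    TOPIC_WORDS = {
        "hobby":   ["guitar", "chess", "hiking"],
        "place":   ["oslo", "kyoto", "lima"],
        "service": ["mail", "bank", "forum"],
    }

    def generate_password(rng=None):
        """Pick one word per topic, capitalize the first, and append a digit and a symbol."""
        rng = rng or random.SystemRandom()
        words = [rng.choice(candidates) for candidates in TOPIC_WORDS.values()]
        words[0] = words[0].capitalize()
        return "".join(words) + rng.choice(string.digits) + rng.choice("!@#$%")

    def guess_count_upper_bound():
        """Crude strength estimate: the size of the space an attacker who knows the
        generation model (topics and word lists) would have to search."""
        space = 1
        for candidates in TOPIC_WORDS.values():
            space *= len(candidates)
        return space * len(string.digits) * len("!@#$%")

    print(generate_password())
    print("guesses needed in the worst case:", guess_count_upper_bound())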
One hundred sixty-four participants from the United States, India, and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, the types of media where phishing occurs, and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences among the three nationalities in agreement regarding phishing characteristics, phishing consequences, and the types of media where phishing occurs. Chronological age and education did not influence the agreement ratings; the samples were therefore demographically equivalent with regard to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions, such as destroying old documents, more frequently than Indians, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions.