Biblio

Found 2705 results

Filters: First Letter Of Last Name is G
2015-05-04
Shaobu Wang, Shuai Lu, Ning Zhou, Guang Lin, Elizondo, M., Pai, M.A..  2014.  Dynamic-Feature Extraction, Attribution, and Reconstruction (DEAR) Method for Power System Model Reduction. Power Systems, IEEE Transactions on. 29:2049-2059.

In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields a better reduction ratio and smaller response errors than traditional coherency-based reduction methods.
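
As a rough illustration of the three DEAR steps on synthetic data, the sketch below extracts dominant features with an SVD, attributes each feature to the generator whose measured response matches it most closely, and reconstructs all trajectories from those characteristic generators by least squares. The matrix shapes, the 95% energy threshold, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gen, n_samples = 20, 500                   # generators x time samples
X = rng.standard_normal((n_gen, n_samples))  # stand-in for measured dynamics

# Step 1, feature extraction: SVD of the measured post-disturbance dynamics.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1
features = Vt[:k]                            # dominant temporal features

# Step 2, feature attribution: pick the generator most correlated with
# each extracted feature, forming the suboptimal "basis".
def best_match(feature):
    corr = np.abs(X @ feature) / (np.linalg.norm(X, axis=1) * np.linalg.norm(feature))
    return int(np.argmax(corr))

characteristic = sorted({best_match(f) for f in features})

# Step 3, reconstruction: approximate every generator's trajectory as a
# linear combination of the characteristic generators (least squares).
B = X[characteristic]
coeff, *_ = np.linalg.lstsq(B.T, X.T, rcond=None)
X_hat = (B.T @ coeff).T
print("generators kept:", len(characteristic), "of", n_gen)
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```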

2015-05-05
Matias, J., Garay, J., Mendiola, A., Toledo, N., Jacob, E..  2014.  FlowNAC: Flow-based Network Access Control. Software Defined Networks (EWSDN), 2014 Third European Workshop on. :79-84.

This paper presents FlowNAC, a Flow-based Network Access Control solution that grants users the right to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested, and multiple services can be authorized simultaneously. Building this proposal on SDN principles has several benefits: SDN adds the appropriate granularity (fine- or coarse-grained) depending on the target scenario, and the flexibility to dynamically identify services at the data plane as sets of flows and enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (a novel EAPoL-in-EAPoL encapsulation) to authenticate users (without the need for a captive portal), and service-level access control based on proactive deployment of flows (instead of reactive). Explicit service requests avoid misidentifying the target service, as could happen when services are identified by analyzing traffic (e.g., private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results.
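
A minimal sketch of the core abstraction described above, a service defined univocally as a set of flows that is authorized per user and whose rules are deployed proactively, might look as follows. The dataclass, service table, and function names are hypothetical stand-ins, not FlowNAC's controller API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    dst_ip: str
    dst_port: int
    proto: str

# Each service is defined univocally as a set of flows (illustrative values).
SERVICES = {
    "web":  {Flow("10.0.0.10", 80, "tcp"), Flow("10.0.0.10", 443, "tcp")},
    "mail": {Flow("10.0.0.20", 25, "tcp")},
}

authorized: dict[str, set[Flow]] = {}

def grant(user: str, service: str) -> list[Flow]:
    """Authorize one service for a user and return the flow rules to push
    proactively to the data plane (no reactive packet-in handling)."""
    flows = SERVICES[service]
    authorized.setdefault(user, set()).update(flows)
    return sorted(flows, key=lambda f: f.dst_port)

def permitted(user: str, flow: Flow) -> bool:
    return flow in authorized.get(user, set())

rules = grant("alice", "web")        # explicit service request
print(permitted("alice", Flow("10.0.0.10", 443, "tcp")))  # True
print(permitted("alice", Flow("10.0.0.20", 25, "tcp")))   # False
```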

Han Huang, Jun Zhang, Guanglong Xie.  2014.  Research on the future functions and modality of smart grid and its key technologies. Electricity Distribution (CICED), 2014 China International Conference on. :1241-1245.

The power network is an important part of a nation's comprehensive energy transmission system, underpinning both energy security and the running of the economy and society. Moreover, because many industries are involved, grid development can drive national innovation capability. Advances in materials science, computing, and information and communication technology are now invigorating the smart grid. This paper studies the functions and modality of the smart grid along the energy, geography, and technology dimensions. The analysis of the technology dimension addresses two aspects: network control and interaction with customers. A mapping is given between the functions of the smart grid and eight key technologies: large-capacity flexible transmission, DC power distribution, distributed power generation, large-scale energy storage, real-time tracking simulation, intelligent electricity application, big data analysis and cloud computing, and wide-area situational awareness. Research emphases for these key technologies are proposed.

Conghuan Ye, Zenggang Xiong, Yaoming Ding, Jiping Li, Guangwei Wang, Xuemin Zhang, Kaibing Zhang.  2014.  Secure Multimedia Big Data Sharing in Social Networks Using Fingerprinting and Encryption in the JPEG2000 Compressed Domain. Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on. :616-621.

With the advent of social networks and cloud computing, the amount of multimedia data produced and communicated within social networks is rapidly increasing. Meanwhile, social networking platforms based on cloud computing have made multimedia big data sharing in social networks easier and more efficient. The growth of social multimedia, as demonstrated by social networking sites such as Facebook and YouTube, combined with advances in multimedia content analysis, underscores potential risks for malicious use such as illegal copying, piracy, plagiarism, and misappropriation. Therefore, secure multimedia sharing and traitor tracing have become critical and urgent issues in social networks. In this paper, we propose a scheme for implementing the Tree-Structured Haar (TSH) transform in a homomorphic encrypted domain for fingerprinting using social network analysis, with the purpose of protecting media distribution in social networks. The motivation is to map the hierarchical community structure of the social network onto the tree structure of the TSH transform for JPEG2000 coding, encryption, and fingerprinting. First, the fingerprint code is produced using social network analysis. Second, the encrypted content is decomposed by the TSH transform. Third, the content is fingerprinted in the TSH transform domain. Finally, the encrypted and fingerprinted contents are delivered to users via hybrid multicast-unicast. The use of fingerprinting along with encryption can provide a double layer of protection for media sharing in social networks. Theoretical analysis and experimental results show the effectiveness of the proposed scheme.
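
As a toy illustration of transform-domain fingerprinting, the sketch below embeds a per-user spread-spectrum code in the detail coefficients of a one-level Haar transform and detects it by correlation. The plain Haar transform stands in for the paper's tree-structured variant, and the encryption and JPEG2000 stages are omitted.

```python
import numpy as np

def haar(x):
    # One-level Haar analysis: averages and differences of adjacent samples.
    return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

def ihaar(a, d):
    # Exact inverse of haar(): interleave a+d and a-d.
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = a + d, a - d
    return x

rng = np.random.default_rng(1)
content = rng.uniform(0, 255, 256)           # stand-in media signal
fingerprint = rng.choice([-1.0, 1.0], 128)   # per-user spread-spectrum code

a, d = haar(content)
marked = ihaar(a, d + 0.5 * fingerprint)     # embed in detail coefficients

# Detection: correlate the received detail band with the user's code.
_, d_rx = haar(marked)
score = float(d_rx @ fingerprint) / fingerprint.size
print("detection score:", round(score, 3))   # close to 0.5 for the right user
```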

2015-05-04
Zurek, E.E., Gamarra, A.M.R., Escorcia, G.J.R., Gutierrez, C., Bayona, H., Perez, R., Garcia, X..  2014.  Spectral analysis techniques for acoustic fingerprints recognition. Image, Signal Processing and Artificial Vision (STSIVA), 2014 XIX Symposium on. :1-5.

This article presents results of the recognition process of acoustic fingerprints from a noise source using spectral characteristics of the signal. Principal Component Analysis (PCA) is applied to reduce the dimensionality of the extracted features, and a classifier is then implemented using the k-nearest neighbors (KNN) method to identify the pattern of the audio signal. This classifier is compared with an Artificial Neural Network (ANN) implementation. A filtering stage must be applied to the acquired signals to reduce the 60 Hz noise generated by imperfections in the acquisition system. The methods described in this paper were used for vessel recognition.
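
A compact sketch of that pipeline (60 Hz notch filtering, spectral features, PCA, then KNN) on synthetic tones might look like the following. The filter Q, feature length, component count, and k are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

fs = 8000
rng = np.random.default_rng(0)

def spectrum(sig):
    b, a = iirnotch(60.0, Q=30.0, fs=fs)      # suppress 60 Hz acquisition noise
    clean = filtfilt(b, a, sig)
    return np.abs(np.fft.rfft(clean))[:512]   # spectral feature vector

def make(freq):                               # synthetic "vessel" signature
    t = np.arange(fs) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(fs)

X = np.array([spectrum(make(f)) for f in [200] * 20 + [350] * 20])
y = np.array([0] * 20 + [1] * 20)

pca = PCA(n_components=10).fit(X)             # reduce feature dimensionality
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X), y)
print("resubstitution accuracy:", knn.score(pca.transform(X), y))
```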

2015-05-05
Toshiro Yano, E., Bhatt, P., Gustavsson, P.M., Ahlfeldt, R.-M..  2014.  Towards a Methodology for Cybersecurity Risk Management Using Agents Paradigm. Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint. :325-325.

In order to deal with shortcomings of security management systems, this work proposes a methodology for cybersecurity risk management based on the agents paradigm. In this approach, a system is decomposed into agents that may be used to attain goals established by attackers. Threats to the business are realized when attackers achieve their goals on service and deployment agents. To support proactive behavior, sensors linked to security mechanisms are analyzed in accordance with a model for Situational Awareness (SA) [4].

2015-04-30
Kounelis, I., Baldini, G., Neisse, R., Steri, G., Tallacchini, M., Guimaraes Pereira, A..  2014.  Building Trust in the Human–Internet of Things Relationship. Technology and Society Magazine, IEEE. 33:73-80.

Our vision in this paper is that agency, as the individual ability to intervene and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we first address the issue of agency, namely the individual capability to adopt free decisions, as a relevant driver in building trusted human-IoT relations, and how agency should be embedded in digital systems. We then present the main challenges posed by existing approaches to implementing this vision, followed by our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.

2015-11-23
Peter Dinges, University of Illinois at Urbana-Champaign, Minas Charalambides, University of Illinois at Urbana-Champaign, Gul Agha, University of Illinois at Urbana-Champaign.  2013.  Automated Inference of Atomic Sets for Safe Concurrent Execution. 11th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering .

Atomic sets are a synchronization mechanism in which the programmer specifies the groups of data that must be accessed as a unit. The compiler can check this specification for consistency, detect deadlocks, and automatically add the primitives to prevent interleaved access. Atomic sets relieve the programmer from the burden of recognizing and pruning execution paths which lead to interleaved access, thereby reducing the potential for data races. However, manually converting programs from lock-based synchronization to atomic sets requires reasoning about the program’s concurrency structure, which can be a challenge even for small programs. Our analysis eliminates the challenge by automating the reasoning. Our implementation of the analysis allowed us to derive the atomic sets for large code bases such as the Java collections framework in a matter of minutes. The analysis is based on execution traces; assuming all traces reflect intended behavior, our analysis enables safe concurrency by preventing unobserved interleavings which may harbor latent Heisenbugs.
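
A much-simplified sketch of the trace-based idea: fields observed to be accessed together inside the same atomic region are merged into one candidate atomic set using union-find. The trace format and field names are invented for illustration; the paper's analysis of problematic interleavings is considerably more involved.

```python
from collections import defaultdict

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Each trace event: (thread_id, atomic_region_id, field_accessed).
trace = [(1, "r1", "list.head"), (1, "r1", "list.size"),
         (2, "r2", "list.size"), (2, "r2", "list.head"),
         (1, "r3", "cache.hits")]

by_region = defaultdict(list)
for _, region, field in trace:
    by_region[region].append(field)

for fields in by_region.values():
    for f in fields:
        find(f)                         # register every observed field
    for f in fields[1:]:
        union(fields[0], f)             # co-accessed => same atomic set

sets = defaultdict(set)
for f in parent:
    sets[find(f)].add(f)
print(list(sets.values()))  # [{'list.head', 'list.size'}, {'cache.hits'}]
```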

2018-05-27
Cem Aksoylar, George K. Atia, Venkatesh Saligrama.  2013.  Compressive sensing bounds through a unifying framework for sparse models. IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013. :5524-5528.
2018-06-04
Gerdes, Ryan M, Winstead, Chris, Heaslip, Kevin.  2013.  CPS: an efficiency-motivated attack against autonomous vehicular transportation. Proceedings of the 29th Annual Computer Security Applications Conference. :99–108.
2015-12-02
Gul Agha, University of Illinois at Urbana-Champaign.  2013.  Euclidean Model Checking: A Scalable Method for Verifying Quantitative Properties in Probabilistic Systems. 5th International Conference on Algebraic Informatics.

In this lecture, I will focus on an alternate method for addressing the problem of large state spaces. For many purposes, it may not be necessary to consider the global state as a cross-product of the states of individual actors. We take our inspiration from statistical physics, where macro properties of a system may be related to the properties of individual molecules using probability distributions on the states of the latter. Consider a simple example. Suppose associated with each state is the amount of energy a node consumes when in that state (such an associated value mapping is called the reward function of the state). Now, if we have a frequency count of the nodes in each state, we can estimate the total energy consumed by the system. This suggests a model where the global state is a vector of probability mass functions (pmfs). In the above example, if each node has five possible states, the size of the vector would be 5, one element for each possible state of a node. Each element of the vector represents the probability that any node is in the particular state corresponding to that entry.

This was an invited talk to the 5th International Conference on Algebraic Informatics.
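
The energy example admits a one-line worked calculation. The sketch below uses made-up numbers for the pmf, the reward (energy) function, and a node-level transition matrix, purely to make the "vector of pmfs" model concrete.

```python
import numpy as np

# Global state as a pmf over the 5 local node states (illustrative numbers).
n_nodes = 1000
pmf = np.array([0.50, 0.20, 0.15, 0.10, 0.05])   # P(node is in state i)
energy = np.array([1.0, 2.5, 4.0, 6.0, 9.0])     # reward: watts drawn in state i

# Expected total energy = n_nodes * sum_i pmf[i] * energy[i] = 1000 * 2.65.
print("expected system energy:", n_nodes * pmf @ energy, "W")

# One step of the macro-level dynamics under a node transition matrix P.
P = np.full((5, 5), 0.05) + 0.75 * np.eye(5)     # each row sums to 1
print("next-step pmf:", pmf @ P)
```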

2016-12-05
Rogerio de Lemos, Holger Giese, Hausi Muller, Mary Shaw, Jesper Andersson, Marin Litoiu, Bradley Schmerl, Gabriel Tamura, Norha Villegas, Thomas Vogel et al..  2013.  Software engineering for self-adaptive systems: A second research roadmap.

The goal of this roadmap paper is to summarize the state of the art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.

2018-05-27
Cem Aksoylar, George K. Atia, Venkatesh Saligrama.  2013.  Sparse signal processing with linear and non-linear observations: A unified Shannon theoretic approach. 2013 IEEE Information Theory Workshop, ITW 2013, Sevilla, Spain, September 9-13, 2013. :1-5.
2017-02-02
Joseph Sloan, University of Illinois at Urbana-Champaign, Rakesh Kumar, University of Illinois at Urbana-Champaign, Greg Bronevetsky, Lawrence Livermore National Laboratory.  2013.  An Algorithmic Approach to Error Localization and Partial Recomputation for Low-Overhead Fault Tolerance. 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2013).

The increasing size and complexity of massively parallel systems (e.g. HPC systems) is making it increasingly likely that individual circuits will produce erroneous results. For this reason, novel fault tolerance approaches are increasingly needed. Prior fault tolerance approaches often rely on checkpoint-rollback based schemes. Unfortunately, such schemes are primarily limited to rare error event scenarios, as their overheads become prohibitive if faults are common. In this paper, we propose a novel approach for algorithmic correction of faulty application outputs. The key insight behind this approach is that even in high-error scenarios, when the result of an algorithm is erroneous, most of it is still correct. Instead of simply rolling back to the most recent checkpoint and repeating the entire segment of computation, our novel resilience approach uses algorithmic error localization and partial recomputation to efficiently correct the corrupted results. We evaluate our approach in the specific algorithmic scenario of linear algebra operations, focusing on matrix-vector multiplication (MVM) and iterative linear solvers. We develop a novel technique for localizing errors in MVM, show how to achieve partial recomputation within this algorithm, and demonstrate that this approach both improves the performance of the Conjugate Gradient solver in high-error scenarios by 3x-4x and increases the probability that it completes successfully by up to 60%, with parallel experiments on up to 100 nodes.
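
The localize-then-recompute idea can be sketched for matrix-vector multiplication using per-block checksums: if the sum of a result block disagrees with the checksum row applied to the input, only that block is recomputed. The block size, tolerance, and injected fault below are illustrative; the paper's actual localization scheme is more refined.

```python
import numpy as np

rng = np.random.default_rng(0)
n, block = 1024, 64
A, x = rng.standard_normal((n, n)), rng.standard_normal(n)

y = A @ x
y[300] += 50.0                      # inject a soft error into one entry

# Localize: compare each block's sum against (1^T A_block) x, which costs
# only one extra inner product per block.
for start in range(0, n, block):
    sl = slice(start, start + block)
    expected = A[sl].sum(axis=0) @ x
    if not np.isclose(y[sl].sum(), expected):
        y[sl] = A[sl] @ x           # recompute only the faulty block
        print(f"corrected block {start}-{start + block - 1}")

assert np.allclose(y, A @ x)        # full result is now correct
```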

2016-12-05
Erik Zawadzki, Geoffrey Gordon, Andre Platzer.  2013.  A projection algorithm for strictly monotone linear complementarity problems. Proceedings of NIPS OPT2013: Optimization for Machine Learning.

Complementarity problems play a central role in equilibrium finding, physical simulation, and optimization. As a consequence, we are interested in understanding how to solve these problems quickly, and this often involves approximation. In this paper we present a method for approximately solving strictly monotone linear complementarity problems with a Galerkin approximation. We also give bounds for the approximation error, and prove novel bounds on perturbation error. These perturbation bounds suggest that a Galerkin approximation may be much less sensitive to noise than the original LCP.
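
For concreteness, a basic projected-iteration solver for a strictly monotone LCP (find z >= 0 with Mz + q >= 0 and z^T(Mz + q) = 0) is sketched below. It omits the paper's Galerkin basis and error bounds, and the step size and tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
M = B @ B.T + 0.1 * np.eye(50)      # positive definite => strictly monotone
q = rng.standard_normal(50)

z = np.zeros(50)
alpha = 1.0 / np.linalg.norm(M, 2)  # conservative step size
for _ in range(5000):
    # Projected iteration: z <- max(0, z - alpha * (Mz + q)).
    z_new = np.maximum(0.0, z - alpha * (M @ z + q))
    if np.linalg.norm(z_new - z) < 1e-10:
        break
    z = z_new

w = M @ z + q
print("complementarity residual z'w:", float(z @ w))   # ~0 at a solution
print("feasible:", bool((z >= 0).all() and (w >= -1e-8).all()))
```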

2018-05-17
J. C. Gallagher, E. T. Matson, G. W. Greenwood.  2013.  On the implications of plug-and-learn adaptive hardware components toward a cyberphysical systems perspective on evolvable and adaptive hardware. 2013 IEEE International Conference on Evolvable Systems (ICES). :59-65.

Evolvable and Adaptive Hardware (EAH) Systems have been a subject of study for about two decades. This paper argues that viewing EAH devices in isolation from the larger systems in which they serve as components is somewhat dangerous in that EAH devices can subvert the design hierarchies upon which designers base verification and validation efforts. The paper proposes augmenting EAH components with additional machinery to enable the application of model-checking and related Cyber-Physical Systems techniques to extract evolving intra-module relationships for formal verification and validation purposes.

2021-02-08
Geetha, C. R., Basavaraju, S., Puttamadappa, C..  2013.  Variable load image steganography using multiple edge detection and minimum error replacement method. 2013 IEEE Conference on Information Communication Technologies. :53-58.

This paper proposes a steganography method using digital images, in which the data to be secured is embedded into a digital image. Studies of the human visual system show that changes at image edges are largely imperceptible to human eyes. Therefore, edge detection is used in steganography to increase the data-hiding capacity by embedding more data in edge pixels. If the number of edge pixels can be increased, the amount of data that can be hidden in the image increases as well; to this end, multiple edge detection is employed. Edge detection is carried out using a more sophisticated operator, such as the Canny operator. To compensate for the decrease in PSNR caused by the larger amount of hidden data, the Minimum Error Replacement (MER) method is used. Therefore, the main goals of image steganography, i.e., security with the highest embedding capacity and good visual quality, are achieved. To extract the data, the original image and the embedding ratio are needed: extraction is done by applying multiple edge detection to the original image and extracting the data according to the embedding ratio.
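
A toy sketch of edge-based LSB embedding follows. A crude gradient threshold stands in for the paper's multiple Canny passes, and the Minimum Error Replacement step is omitted, so this shows only the capacity and embed/extract mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)  # stand-in cover image

# Crude edge mask: large horizontal/vertical intensity differences.
gx = np.abs(np.diff(img.astype(int), axis=1, prepend=0))
gy = np.abs(np.diff(img.astype(int), axis=0, prepend=0))
edges = (gx + gy) > 120

bits = rng.integers(0, 2, int(edges.sum()), dtype=np.uint8)  # payload

stego = img.copy()
stego[edges] = (stego[edges] & 0xFE) | bits      # embed in edge-pixel LSBs

recovered = stego[edges] & 1                     # extraction reuses the mask
print("capacity (bits):", int(edges.sum()))
print("recovered ok:", bool((recovered == bits).all()))
```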

2018-05-25
H. Dai, X. Wu, L. Xu, G. Chen, S. Lin.  2013.  Using Minimum Mobile Chargers to Keep Large-Scale Wireless Rechargeable Sensor Networks Running Forever. 2013 22nd International Conference on Computer Communication and Networks (ICCCN). :1-7.
2021-04-08
Colbaugh, R., Glass, K., Bauer, T..  2013.  Dynamic information-theoretic measures for security informatics. 2013 IEEE International Conference on Intelligence and Security Informatics. :45–49.
Many important security informatics problems require consideration of dynamical phenomena for their solution; examples include predicting the behavior of individuals in social networks and distinguishing malicious and innocent computer network activities based on activity traces. While information theory offers powerful tools for analyzing dynamical processes, to date the application of information-theoretic methods in security domains has focused on static analyses (e.g., cryptography, natural language processing). This paper leverages information-theoretic concepts and measures to quantify the similarity of pairs of stochastic dynamical systems, and shows that this capability can be used to solve important problems which arise in security applications. We begin by presenting a concise review of the information theory required for our development, and then address two challenging tasks: 1.) characterizing the way influence propagates through social networks, and 2.) distinguishing malware from legitimate software based on the instruction sequences of the disassembled programs. In each application, case studies involving real-world datasets demonstrate that the proposed techniques outperform standard methods.
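
One standard way to make "similarity of pairs of stochastic dynamical systems" concrete, though not necessarily the paper's exact measure, is the Kullback-Leibler divergence rate between two Markov chains, sketched below with invented two-state activity models.

```python
import numpy as np

def stationary(P):
    # Left eigenvector of P for eigenvalue 1, normalized to a distribution.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def kl_rate(P, Q):
    # D(P||Q) per step: sum_i pi_i sum_j P_ij log(P_ij / Q_ij).
    pi = stationary(P)
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical "innocent" model
Q = np.array([[0.5, 0.5], [0.5, 0.5]])   # hypothetical "malicious" model
print("KL rate D(P||Q):", round(kl_rate(P, Q), 4))
```
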
2014-09-17
Colbaugh, R., Glass, K..  2013.  Moving target defense for adaptive adversaries. Intelligence and Security Informatics (ISI), 2013 IEEE International Conference on. :50-55.

Machine learning (ML) plays a central role in the solution of many security problems, for example enabling malicious and innocent activities to be rapidly and accurately distinguished and appropriate actions to be taken. Unfortunately, a standard assumption in ML - that the training and test data are identically distributed - is typically violated in security applications, leading to degraded algorithm performance and reduced security. Previous research has attempted to address this challenge by developing ML algorithms which are either robust to differences between training and test data or are able to predict and account for these differences. This paper adopts a different approach, developing a class of moving target (MT) defenses that are difficult for adversaries to reverse-engineer, which in turn decreases the adversaries' ability to generate training/test data differences that benefit them. We leverage the coevolutionary relationship between attackers and defenders to derive a simple, flexible MT defense strategy which is optimal or nearly optimal for a broad range of security problems. Case studies involving two distinct cyber defense applications demonstrate that the proposed MT algorithm outperforms standard static methods, offering effective defense against intelligent, adaptive adversaries.
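
As a toy illustration of the moving-target idea, not the paper's derived strategy, the sketch below randomizes which of several trained detectors answers each query, so that an adversary probing the defense observes an unstable decision boundary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in benign/malicious labels

# Pool of defenders with different inductive biases.
pool = [LogisticRegression(max_iter=1000).fit(X, y),
        DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)]

def classify(sample):
    model = pool[rng.integers(len(pool))]       # fresh random draw per query
    return int(model.predict(sample.reshape(1, -1))[0])

print([classify(X[i]) for i in range(5)])
```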

Parno, B., Howell, J., Gentry, C., Raykova, M..  2013.  Pinocchio: Nearly Practical Verifiable Computation. Security and Privacy (SP), 2013 IEEE Symposium on. :238-252.

To instill greater confidence in computations outsourced to the cloud, clients should be able to verify the correctness of the results returned. To this end, we introduce Pinocchio, a built system for efficiently verifying general computations while relying only on cryptographic assumptions. With Pinocchio, the client creates a public evaluation key to describe her computation; this setup is proportional to evaluating the computation once. The worker then evaluates the computation on a particular input and uses the evaluation key to produce a proof of correctness. The proof is only 288 bytes, regardless of the computation performed or the size of the inputs and outputs. Anyone can use a public verification key to check the proof. Crucially, our evaluation on seven applications demonstrates that Pinocchio is efficient in practice too. Pinocchio's verification time is typically 10ms: 5-7 orders of magnitude less than previous work; indeed Pinocchio is the first general-purpose system to demonstrate verification cheaper than native execution (for some apps). Pinocchio also reduces the worker's proof effort by an additional 19-60x. As an additional feature, Pinocchio generalizes to zero-knowledge proofs at a negligible cost over the base protocol. Finally, to aid development, Pinocchio provides an end-to-end toolchain that compiles a subset of C into programs that implement the verifiable computation protocol.
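
The workflow the abstract describes (one-time key generation, proving, public verification) can be mirrored in shape by the hypothetical stub below. None of these names come from the Pinocchio toolchain, and the hash tag is not a cryptographic proof; the stub only fixes the interfaces of the protocol's four roles.

```python
import hashlib

def keygen(program_id: str):
    ek = ("ek", program_id)          # evaluation key (setup done once)
    vk = ("vk", program_id)          # public verification key
    return ek, vk

def prove(ek, f, x):
    y = f(x)                         # worker evaluates the computation
    tag = hashlib.sha256(f"{ek[1]}|{x}|{y}".encode()).hexdigest()
    return y, tag                    # constant-size "proof" stand-in

def verify(vk, x, y, tag):
    return tag == hashlib.sha256(f"{vk[1]}|{x}|{y}".encode()).hexdigest()

ek, vk = keygen("square")
y, proof = prove(ek, lambda v: v * v, 12)
print(verify(vk, 12, y, proof))      # True: anyone holding vk can check
```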

2022-04-20
Hassell, Suzanne, Beraud, Paul, Cruz, Alen, Ganga, Gangadhar, Martin, Steve, Toennies, Justin, Vazquez, Pablo, Wright, Gary, Gomez, Daniel, Pietryka, Frank et al..  2012.  Evaluating network cyber resiliency methods using cyber threat, Vulnerability and Defense Modeling and Simulation. MILCOM 2012 - 2012 IEEE Military Communications Conference. :1-6.
This paper describes a Cyber Threat, Vulnerability and Defense Modeling and Simulation tool kit used for evaluation of systems and networks to improve cyber resiliency. This capability is used to help increase the resiliency of networks at various stages of their lifecycle, from initial design and architecture through the operation of deployed systems and networks. Resiliency of computer systems and networks to cyber threats is facilitated by the modeling of agile and resilient defenses versus threats and running multiple simulations evaluated against resiliency metrics. This helps network designers, cyber analysts and Security Operations Center personnel to perform trades using what-if scenarios to select resiliency capabilities and optimally design and configure cyber resiliency capabilities for their systems and networks.
2018-06-04
Evans, Travis, Heaslip, Kevin, Boggs, Wesley, Hurwitz, David, Gardiner, Kevin.  2012.  Assessment of sign retroreflectivity compliance for development of a management plan. Transportation Research Record: Journal of the Transportation Research Board. :103–112.
2018-05-27
George K. Atia, Venkatesh Saligrama.  2012.  Boolean Compressed Sensing and Noisy Group Testing. IEEE Trans. Information Theory. 58:1880-1901.