Biblio

Found 2320 results

Filters: First Letter Of Last Name is P
2015-11-23
Peter Dinges, University of Illinois at Urbana-Champaign, Minas Charalambides, University of Illinois at Urbana-Champaign, Gul Agha, University of Illinois at Urbana-Champaign.  2013.  Automated Inference of Atomic Sets for Safe Concurrent Execution. 11th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering.

Atomic sets are a synchronization mechanism in which the programmer specifies the groups of data that must be accessed as a unit. The compiler can check this specification for consistency, detect deadlocks, and automatically add the primitives to prevent interleaved access. Atomic sets relieve the programmer from the burden of recognizing and pruning execution paths which lead to interleaved access, thereby reducing the potential for data races. However, manually converting programs from lock-based synchronization to atomic sets requires reasoning about the program’s concurrency structure, which can be a challenge even for small programs. Our analysis eliminates the challenge by automating the reasoning. Our implementation of the analysis allowed us to derive the atomic sets for large code bases such as the Java collections framework in a matter of minutes. The analysis is based on execution traces; assuming all traces reflect intended behavior, our analysis enables safe concurrency by preventing unobserved interleavings which may harbor latent Heisenbugs.
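
For a concrete feel for trace-based inference, the following is a minimal, hypothetical Python sketch of the underlying intuition: fields observed being accessed together inside the same atomic region are merged into one candidate set. It is not the paper's analysis (which handles locks, aliasing, and much larger traces); all names are invented.

```python
# Toy sketch only (not the paper's algorithm): infer candidate atomic sets
# from execution traces by grouping fields that are observed being accessed
# together inside the same atomic region. Field names are hypothetical.
from collections import defaultdict

def infer_atomic_sets(traces):
    """traces: iterable of atomic regions, each a set of field names accessed as a unit."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for region in traces:
        fields = list(region)
        for f in fields:
            find(f)                          # register every observed field
        for other in fields[1:]:
            union(fields[0], other)          # co-accessed fields belong together

    groups = defaultdict(set)
    for f in parent:
        groups[find(f)].add(f)
    return list(groups.values())

# Example: three traced regions over a bounded buffer and a counter.
traces = [{"buf.head", "buf.count"}, {"buf.tail", "buf.count"}, {"stats.hits"}]
print(infer_atomic_sets(traces))
# e.g. [{'buf.head', 'buf.count', 'buf.tail'}, {'stats.hits'}]
```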

Minas Charalambides, University of Illinois at Urbana-Champaign, Peter Dinges, University of Illinois at Urbana-Champaign, Gul Agha, University of Illinois at Urbana-Champaign.  2012.  Parameterized Concurrent Multi-Party Session Types. 11th International Workshop on Foundations of Coordination Languages and Self-Adaptive Systems (FOCLASA 2012). 91:16-30.

Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.

2015-11-18
Fan Yang, University of Illinois at Urbana-Champaign, Santiago Escobar, Universidad Politécnica de Valencia, Spain, Catherine Meadows, Naval Research Laboratory, Jose Meseguer, University of Illinois at Urbana-Champaign, Paliath Narendran, University at Albany-SUNY.  2014.  Theories for Homomorphic Encryption, Unification and the Finite Variant Property. 16th International Symposium on Principles and Practice of Declarative Programming (PPDP 2014).

Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.

In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.
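
For orientation, a generic equational presentation of the problematic theory discussed above, encryption that is homomorphic over an Abelian group, might look as follows; the symbols are illustrative and are not taken from the paper's hierarchy of theories.

```latex
% Illustrative only: a generic Abelian-group theory plus a homomorphic
% encryption symbol e(_,_); not the paper's exact presentation.
\begin{align*}
  x * (y * z) &= (x * y) * z, & x * y &= y * x,\\
  x * 1       &= x,           & x * x^{-1} &= 1,\\
  e(x * y,\, k) &= e(x, k) * e(y, k) & &\text{(homomorphism over the group)}
\end{align*}
```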

Serdar Erbatur, Università degli Studi di Verona, Santiago Escobar, Universidad Politécnica de Valencia, Spain, Deepak Kapur, University of New Mexico, Zhiqiang Liu, Clarkson University, Christopher A. Lynch, Clarkson University, Catherine Meadows, Naval Research Laboratory, Jose Meseguer, University of Illinois at Urbana-Champaign, Paliath Narendran, University at Albany-SUNY, Sonia Santiago, Universidad Politécnica de Valencia, Spain, Ralf Sasse, Institute of Information Security, ETH.  2013.  Asymmetric Unification: A New Unification Paradigm for Cryptographic Protocol Analysis. 24th International Conference on Automated Deduction (CADE 2013).

We present a new paradigm for unification arising out of a technique commonly used in cryptographic protocol analysis tools that employ unification modulo equational theories. This paradigm relies on: (i) a decomposition of an equational theory into (R, E) where R is confluent, terminating, and coherent modulo E, and (ii) on reducing unification problems to a set of problems s =? t under the constraint that t remains R/E-irreducible. We call this method asymmetric unification. We first present a general-purpose generic asymmetric unification algorithm, and then outline an approach for converting special-purpose conventional unification algorithms to asymmetric ones, demonstrating it for exclusive-or with uninterpreted function symbols. We demonstrate how asymmetric unification can improve performance by running the algorithm on a set of benchmark problems. We also give results on the complexity and decidability of asymmetric unification.
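
As a concrete (and standard, rather than paper-specific) illustration of such a decomposition, exclusive-or is usually split as follows, with the third rewrite rule added so that R stays coherent modulo AC.

```latex
% Illustrative decomposition (R, E) for exclusive-or; E is the
% associativity-commutativity of \oplus. Not quoted from the paper.
\begin{align*}
  R:\quad & x \oplus 0 \to x, \qquad x \oplus x \to 0, \qquad x \oplus x \oplus y \to y\\
  E:\quad & x \oplus y = y \oplus x, \qquad (x \oplus y) \oplus z = x \oplus (y \oplus z)
\end{align*}
% An asymmetric problem then has the form s =?_{R,E} t with the added
% constraint that the instantiated right-hand side t stays R/E-irreducible.
```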

Phuong Cao, University of Illinois at Urbana-Champaign, Eric Badger, University of Illinois at Urbana-Champaign, Adam Slagell, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar Iyer, University of Illinois at Urbana-Champaign.  2015.  Preemptive Intrusion Detection: Theoretical Framework and Real-World Measurements. Symposium and Bootcamp on the Science of Security (HotSoS 2015).

This paper presents a Factor Graph based framework called AttackTagger for highly accurate and preemptive detection of attacks, i.e., before the system misuse. We use security logs on real incidents that occurred over a six-year period at the National Center for Supercomputing Applications (NCSA) to evaluate AttackTagger. Our data consist of security incidents that led to compromise of the target system, i.e., the attacks in the incidents were only identified after the fact by security analysts. AttackTagger detected 74 percent of attacks, and the majority of them were detected before the system misuse. Finally, AttackTagger uncovered six hidden attacks that were not detected by intrusion detection systems during the incidents or by security analysts in post-incident forensic analysis.
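
To make the factor-graph idea concrete, here is a deliberately tiny, hypothetical Python sketch: per-event and transition factors score candidate user-state sequences over a short event log, and the highest-scoring sequence is the tagging. The events, states, and weights are invented, and the real AttackTagger uses learned factors and proper inference rather than exhaustive search.

```python
# Toy abstraction (not AttackTagger itself): score user-state sequences over a
# short event log with unary and pairwise factors, then pick the best sequence
# by exhaustive search. All factor weights below are invented.
from itertools import product

STATES = ["benign", "suspicious", "malicious"]

# Unary factors: how strongly an observed event supports each state.
EVENT_FACTOR = {
    "login_unusual_ip": {"benign": 0.2, "suspicious": 0.7, "malicious": 0.1},
    "download_exploit": {"benign": 0.05, "suspicious": 0.35, "malicious": 0.6},
    "spawn_root_shell": {"benign": 0.01, "suspicious": 0.19, "malicious": 0.8},
}

# Pairwise factors: states tend to escalate rather than jump around.
TRANSITION_FACTOR = {
    ("benign", "benign"): 0.6, ("benign", "suspicious"): 0.3, ("benign", "malicious"): 0.1,
    ("suspicious", "suspicious"): 0.5, ("suspicious", "malicious"): 0.4,
    ("suspicious", "benign"): 0.1, ("malicious", "malicious"): 0.9,
    ("malicious", "suspicious"): 0.05, ("malicious", "benign"): 0.05,
}

def best_sequence(events):
    best, best_score = None, 0.0
    for states in product(STATES, repeat=len(events)):
        score = 1.0
        for i, (event, state) in enumerate(zip(events, states)):
            score *= EVENT_FACTOR[event][state]
            if i > 0:
                score *= TRANSITION_FACTOR[(states[i - 1], state)]
        if score > best_score:
            best, best_score = states, score
    return best, best_score

events = ["login_unusual_ip", "download_exploit", "spawn_root_shell"]
print(best_sequence(events))
```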

2015-11-16
Phuong Cao, University of Illinois at Urbana-Champaign, Eric C. Badger, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign, Alexander Withers, University of Illinois at Urbana-Champaign, Adam J. Slagell, University of Illinois at Urbana-Champaign.  2015.  Towards an Unified Security Testbed and Security Analytics Framework. Symposium and Bootcamp for the Science of Security (HotSoS 2015).

This paper presents the architecture of an end-to-end security testbed and security analytics framework, which aims to: i) understand real-world exploitation of known security vulnerabilities and ii) preemptively detect multi-stage attacks, i.e., before the system misuse. With the increasing number of security vulnerabilities, it is necessary for security researchers and practitioners to understand: i) system and network behaviors under attacks and ii) potential effects of attacks to the target infrastructure. To safely emulate and instrument exploits of known vulnerabilities, we use virtualization techniques to isolate attacks in containers, e.g., Linux-based containers or Virtual Machines, and to deploy monitors, e.g., kernel probes or network packet captures, across a system and network stack. To infer the evolution of attack stages from monitoring data, we use a probabilistic graphical model, namely AttackTagger, that represents learned knowledge of simulated attacks in our security testbed and real-world attacks. Experiments are being run on a real-world deployment of the framework at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.

Cuong Pham, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Phuong Cao, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Building Reliable and Secure Virtual Machines using Architectural Invariants. IEEE Security and Privacy. 12(5):82-85.

Reliability and security tend to be treated separately because they appear orthogonal: reliability focuses on accidental failures, security on intentional attacks. Because of the apparent dissimilarity between the two, tools to detect and recover from different classes of failures and attacks are usually designed and implemented differently. So, integrating support for reliability and security in a single framework is a significant challenge.

Here, we discuss how to address this challenge in the context of cloud computing, for which reliability and security are growing concerns. Because cloud deployments usually consist of commodity hardware and software, efficient monitoring is key to achieving resiliency. Although reliability and security monitoring might use different types of analytics, the same sensing infrastructure can provide inputs to monitoring modules.

We split monitoring into two phases: logging and auditing. Logging captures data or events; it constitutes the framework’s core and is common to all monitors. Auditing analyzes data or events; it’s implemented and operated independently by each monitor. To support a range of auditing policies, logging must capture a complete view, including both actions and states of target systems. It must also provide useful, trustworthy information regarding the captured view.

We applied these principles when designing HyperTap, a hypervisor-level monitoring framework for virtual machines (VMs). Unlike most VM-monitoring techniques, HyperTap employs hardware architectural invariants (hardware invariants, for short) to establish the root of trust for logging. Hardware invariants are properties defined and enforced by a hardware platform (for example, the x86 instruction set architecture). Additionally, HyperTap supports continuous, event-driven VM monitoring, which enables both capturing the system state and responding rapidly to actions of interest.
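
A minimal sketch, with invented class and event names, of the logging/auditing split described above: a single logging core captures events once, and independent auditors apply their own reliability or security policies to the same captured view. This illustrates the design principle only, not HyperTap's hypervisor-level implementation.

```python
# Illustrative sketch of the logging/auditing split: one shared logger captures
# events; independent auditors consume the same log under their own policies.
import time

class EventLog:
    """Shared logging core: captures events once, serves all monitors."""
    def __init__(self):
        self.events = []

    def record(self, source, action, state):
        self.events.append({"ts": time.time(), "source": source,
                            "action": action, "state": state})

class ReliabilityAuditor:
    def audit(self, log):
        return [e for e in log.events if e["state"].get("heartbeat_missed", 0) > 3]

class SecurityAuditor:
    def audit(self, log):
        return [e for e in log.events if e["action"] == "privilege_change"
                and not e["state"].get("authorized", False)]

log = EventLog()
log.record("vm-01", "heartbeat", {"heartbeat_missed": 5})
log.record("vm-02", "privilege_change", {"authorized": False})
print(ReliabilityAuditor().audit(log))   # reliability view of the same log
print(SecurityAuditor().audit(log))      # security view of the same log
```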

2015-11-11
Toshiki Kataoka, Dusko Pavlovic.  2015.  Towards Concept Analysis in Categories: Limit Inferior as Algebra, Limit Superior as Coalgebra. 6th Conference on Algebra and Coalgebra in Computer Science (CALCO 2015). 35:130–155.

While computer programs and logical theories begin by declaring the concepts of interest, be it as data types or as predicates, network computation does not allow such global declarations, and requires concept mining and concept analysis to extract shared semantics for different network nodes. Powerful semantic analysis systems have been the drivers of nearly all paradigm shifts on the web. In categorical terms, most of them can be described as bicompletions of enriched matrices, generalizing the Dedekind-MacNeille-style completions from posets to suitably enriched categories. Yet it has been well known for more than 40 years that ordinary categories themselves in general do not permit such completions. Armed with this new semantical view of Dedekind-MacNeille completions, and of matrix bicompletions, we take another look at this ancient mystery. It turns out that simple categorical versions of the limit superior and limit inferior operations characterize a general notion of Dedekind-MacNeille completion, that seems to be appropriate for ordinary categories, and boils down to the more familiar enriched versions when the limits inferior and superior coincide. This explains away the apparent gap among the completions of ordinary categories, and broadens the path towards categorical concept mining and analysis, opened in previous work.
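
For orientation only, the ordinary order-theoretic notions that the paper lifts to categories are the Dedekind-MacNeille cuts of a poset and the lattice limits inferior and superior; the definitions below are the standard ones, not quoted from the paper.

```latex
% Cut operators on a poset (P, \le):
%   A^{u}    = { x \in P : a \le x for all a \in A },
%   A^{\ell} = { x \in P : x \le a for all a \in A };
% the Dedekind-MacNeille completion consists of the cuts A with A^{u\ell} = A.
% In a complete lattice, for a sequence (x_n):
\[
  \liminf_{n} x_n \;=\; \bigvee_{n}\,\bigwedge_{m \ge n} x_m,
  \qquad
  \limsup_{n} x_n \;=\; \bigwedge_{n}\,\bigvee_{m \ge n} x_m,
\]
% with \liminf_n x_n \le \limsup_n x_n, and equality exactly when the
% sequence order-converges.
```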

Wenxuan Zhou, University of Illinois at Urbana-Champaign, Dong Jin, Illinois Institute of Technology, Jason Croft, University of Illinois at Urbana-Champaign, Matthew Caesar, University of Illinois at Urbana-Champaign, P. Brighten Godfrey, University of Illinois at Urbana-Champaign.  2015.  Enforcing Customizable Consistency Properties in Software-Defined Networks. 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 2015).

It is critical to ensure that network policy remains consistent during state transitions. However, existing techniques impose a high cost in update delay, and/or FIB space. We propose the Customizable Consistency Generator (CCG), a fast and generic framework to support customizable consistency policies during network updates. CCG effectively reduces the task of synthesizing an update plan under the constraint of a given consistency policy to a verification problem, by checking whether an update can safely be installed in the network at a particular time, and greedily processing network state transitions to heuristically minimize transition delay. We show a large class of consistency policies are guaranteed by this greedy heuristic alone; in addition, CCG makes judicious use of existing heavier-weight network update mechanisms to provide guarantees when necessary. As such, CCG nearly achieves the “best of both worlds”: the efficiency of simply passing through updates in most cases, with the consistency guarantees of more heavyweight techniques. Mininet and physical testbed evaluations demonstrate CCG’s capability to achieve various types of consistency, such as path and bandwidth properties, with zero switch memory overhead and up to a 3× delay reduction compared to previous solutions.
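
The verification-driven, greedy scheduling idea lends itself to a compact illustration. The hypothetical Python sketch below checks each pending forwarding-rule update against a consistency predicate and installs it only when the simulated result passes, deferring the rest; CCG's actual machinery (its verification engine and fallback to heavier update mechanisms) is far richer.

```python
# Toy sketch of the greedy idea (not CCG itself): keep trying pending switch
# updates and install any whose simulated result still satisfies a
# user-supplied consistency predicate. Switch names and rules are invented.

def greedy_update(network_state, pending_updates, consistent):
    """Install updates one at a time, only when the resulting state stays consistent."""
    applied = []
    progress = True
    while pending_updates and progress:
        progress = False
        for update in list(pending_updates):
            candidate = dict(network_state)
            candidate.update(update)           # simulate installing this update
            if consistent(candidate):
                network_state.update(update)   # safe now: install for real
                pending_updates.remove(update)
                applied.append(update)
                progress = True
    return applied, pending_updates            # leftovers need a heavier mechanism

# Example policy: every switch must reach destination "d" without loops.
def consistent(next_hops):
    for start in next_hops:
        node, seen = start, set()
        while node != "d":
            if node in seen or node not in next_hops:
                return False
            seen.add(node)
            node = next_hops[node]
    return True

state = {"s1": "s2", "s2": "d"}
updates = [{"s2": "s3"}, {"s3": "d"}]
print(greedy_update(state, updates, consistent))
# The s2 -> s3 change is deferred until s3 -> d has been installed.
```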

2015-06-30
Yang, Weining, Chen, Jing, Xiong, Aiping, Proctor, Robert W, Li, Ninghui.  2015.  Effectiveness of a phishing warning in field settings. Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. :14.

We have begun to investigate the effectiveness of a phishing warning Chrome extension in a field setting of everyday computer use. A preliminary experiment has been conducted in which participants installed and used the extension. They were required to fill out an online browsing behavior questionnaire by clicking on a survey link sent in a weekly email by us. Two phishing attacks were simulated during the study by directing participants to "fake" (phishing) survey sites we created. Almost all participants who saw the warnings on our fake sites input incorrect passwords, but follow-up interviews revealed that only one participant did so intentionally. A follow-up interview revealed that the warning failure was mainly due to the survey task being mandatory. Another finding of interest from the interview was that about 50% of the participants had never heard of phishing or did not understand its meaning.

2015-05-06
Lingyu Wang, Jajodia, S., Singhal, A., Pengsu Cheng, Noel, S..  2014.  k-Zero Day Safety: A Network Security Metric for Measuring the Risk of Unknown Vulnerabilities. Dependable and Secure Computing, IEEE Transactions on. 11:30-44.

By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered as something unmeasurable due to the less predictable nature of software flaws. This causes a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required to compromise network assets; a larger count implies more security, because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing the metric, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.
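
A toy illustration of the counting idea, under invented vulnerabilities and a four-host attack graph: k is the size of the smallest set of distinct zero-days whose enabled edges connect the attacker to the asset. The brute-force search below is only for exposition; the paper formalizes the metric and gives heuristics for intractable cases.

```python
# Toy illustration of the k-zero-day counting idea (not the paper's algorithm).
from itertools import combinations

# edge: (from, to, zero-day needed) -- all names invented
EDGES = [
    ("attacker", "dmz_host", "zd_web"),
    ("dmz_host", "app_server", "zd_app"),
    ("attacker", "vpn_gw", "zd_vpn"),
    ("vpn_gw", "app_server", "zd_vpn"),     # same zero-day reused on both hops
    ("app_server", "database", "zd_db"),
]

def reachable(allowed, src, dst):
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for a, b, zd in EDGES:
            if a == node and zd in allowed and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

def k_zero_day(src, dst):
    zero_days = sorted({zd for _, _, zd in EDGES})
    for k in range(len(zero_days) + 1):
        for subset in combinations(zero_days, k):
            if reachable(set(subset), src, dst):
                return k
    return float("inf")

print(k_zero_day("attacker", "database"))   # 2, e.g. {zd_vpn, zd_db}
```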

Pandey, S.K., Mehtre, B.M..  2014.  A Lifecycle Based Approach for Malware Analysis. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :767-771.

Most detection approaches, such as signature-based, anomaly-based, and specification-based detection, are not able to analyze and detect all types of malware. The signature-based approach has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate, and specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection by using several techniques, such as polymorphism, metamorphism, and various hiding techniques. In order to overcome these issues, we propose a new approach for malware analysis and detection that consists of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance & Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. All of these stages integrate as an interrelated process in our proposed approach. This approach addresses the limitations of the three existing approaches by monitoring the behavioral activity of malware at each and every stage of its life cycle, and finally it produces a report on the maliciousness of the files or software.

Xinhai Zhang, Persson, M., Nyberg, M., Mokhtari, B., Einarson, A., Linder, H., Westman, J., DeJiu Chen, Torngren, M..  2014.  Experience on applying software architecture recovery to automotive embedded systems. Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on. :379-382.

The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to maintain consistency with evolving implementations. This paper presents an approach, and industrial experience with it, based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN signal level. Lessons learned are discussed.

Béraud-Sudreau, Q., Begueret, J.-B., Mazouffre, O., Pignol, M., Baguena, L., Neveu, C., Deval, Y., Taris, T..  2014.  SiGe Clock and Data Recovery System Based on Injection-Locked Oscillator for 100 Gbit/s Serial Data Link. Solid-State Circuits, IEEE Journal of. 49:1895-1904.

Clock and data recovery (CDR) systems are the first logic blocks in serial data receivers, and the latter's performance depends on the CDR. In this paper, a 100 Gbit/s CDR designed in 130 nm BiCMOS SiGe is presented. The CDR uses an injection-locked oscillator (ILO) which delivers the 100 GHz clock. The inherent phase shift between the recovered clock and the incoming data is compensated by a feedback loop which performs phase and frequency tracking. Furthermore, a windowed phase comparator has been used, first to lower the classical number of gates in order to prevent any delay skews between the different phase detector blocks, then to decrease the phase comparator operating frequency, and furthermore to extend the ability to track zero-bit patterns. The measurement results demonstrate a 100 GHz clock signal extracted from 50 Gb/s input data, with a phase noise as low as -98 dBc/Hz at 100 kHz offset from the carrier frequency. The rms jitter of the 25 GHz recovered data is only 1.2 ps. The power consumption is 1.4 W under a 2.3 V power supply.

Perzyk, Marcin, Kochanski, Andrzej, Kozlowski, Jacek, Soroczynski, Artur, Biernacki, Robert.  2014.  Comparison of Data Mining Tools for Significance Analysis of Process Parameters in Applications to Process Fault Diagnosis. Inf. Sci. 259:380–392.

This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
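
As a small, invented-data illustration of the contingency-table approach the study found most robust, one can rank categorical process parameters by the chi-square statistic of their association with the pass/fail outcome; the sketch below uses pandas and scipy and is not the authors' code.

```python
# Minimal sketch of the contingency-table idea: rank process parameters by the
# chi-square statistic of their association with a defect outcome.
# All data values below are invented for illustration.
import pandas as pd
from scipy.stats import chi2_contingency

data = pd.DataFrame({
    "pouring_temp": ["high", "high", "low", "low", "high", "low", "low", "high"],
    "mold_type":    ["A",    "B",    "A",   "B",   "A",    "A",   "B",   "B"],
    "defect":       ["yes",  "yes",  "no",  "no",  "yes",  "no",  "no",  "yes"],
})

def significance_ranking(df, target):
    scores = {}
    for column in df.columns.drop(target):
        table = pd.crosstab(df[column], df[target])   # contingency table
        chi2, p_value, _, _ = chi2_contingency(table)
        scores[column] = (chi2, p_value)
    return sorted(scores.items(), key=lambda kv: kv[1][0], reverse=True)

for name, (chi2, p) in significance_ranking(data, "defect"):
    print(f"{name}: chi2={chi2:.2f}, p={p:.3f}")
```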

Apolinarski, W., Iqbal, U., Parreira, J.X..  2014.  The GAMBAS middleware and SDK for smart city applications. Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on. :117-122.

The concept of smart cities envisions services that provide distraction-free support for citizens. To realize this vision, the services must adapt to the citizens' situations, behaviors and intents at runtime. This requires services to gather and process the context of their users. Mobile devices provide a promising basis for determining context in an automated manner on a large scale. However, despite the wide availability of versatile programmable mobile platforms such as Android and iOS, there are only a few examples of smart city applications. One reason for this is that existing software platforms primarily focus on low-level resource management, which requires application developers to repeatedly tackle many challenging tasks. Examples include efficient data acquisition, secure and privacy-preserving data distribution as well as interoperable data integration. In this paper, we describe the GAMBAS middleware which tries to simplify the development of smart city applications. To do this, GAMBAS introduces a Java-based runtime system with an associated software development kit (SDK). To clarify how the runtime system and the SDK can be used for application development, we describe two simple applications that highlight different middleware functions.

Potdar, M.S., Manekar, A.S., Kadu, R.D..  2014.  Android "Health-Dr." Application for Synchronous Information Sharing. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :265-269.

Android "Health-DR." is an innovative idea for ambulatory appliances. With rapidly developing technology, we provide the "Health-DR." application for the insurance agent, dispensary, patients, physician, and annals management (security) for the annals. Principally, ample records are maintained in the hospitals. The application just needs to be installed at the customer site within an IT environment. The main purpose of our application is to provide a healthy environment to the patient. Our core focus is on making the "Health-DR." application meet the patient regimen. For the personal use of members, we provide an authentication service strategy for the "Health-DR." application. The prospective strategy includes professional authentication (user authentication) by the doctor to the patient, actuary, and dispensary. Remote access is available to the medical annals, doctor affability, and patient affability. "Health-DR." provides expertise anytime and anywhere. The application is middleware to isolate the information from affability management, client discovery, and transit of the database. Annotations of records are kept in the bibliography. Mainly, this paper focuses on the conversion of an e-Health application with flexible surroundings.

Kanewala, T.A., Marru, S., Basney, J., Pierce, M..  2014.  A Credential Store for Multi-tenant Science Gateways. Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on. :445-454.

Science Gateways bridge multiple computational grids and clouds, acting as overlay cyber infrastructure. Gateways have three logical tiers: a user interfacing tier, a resource tier and a bridging middleware tier. Different groups may operate these tiers. This introduces three security challenges. First, the gateway middleware must manage multiple types of credentials associated with different resource providers. Second, the separation of the user interface and middleware layers means that security credentials must be securely delegated from the user interface to the middleware. Third, the same middleware may serve multiple gateways, so the middleware must correctly isolate user credentials associated with different gateways. We examine each of these three scenarios, concentrating on the requirements and implementation of the middleware layer. We propose and investigate the use of a Credential Store to solve the three security challenges.
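
A minimal sketch of the isolation requirement described above, with an invented API: credentials are keyed by gateway as well as by user and resource, so a request from one gateway can never resolve another gateway's secrets. The actual Credential Store integrates with gateway middleware and real credential types.

```python
# Invented, minimal API illustrating per-gateway credential isolation.
class CredentialStore:
    def __init__(self):
        self._store = {}          # (gateway_id, user_id, resource) -> credential

    def put(self, gateway_id, user_id, resource, credential):
        self._store[(gateway_id, user_id, resource)] = credential

    def get(self, requesting_gateway, user_id, resource):
        key = (requesting_gateway, user_id, resource)
        if key not in self._store:
            raise PermissionError("no credential for this gateway/user/resource")
        return self._store[key]

store = CredentialStore()
store.put("gateway-A", "alice", "cluster-X", {"type": "ssh-key", "blob": "..."})
print(store.get("gateway-A", "alice", "cluster-X"))      # allowed
try:
    store.get("gateway-B", "alice", "cluster-X")          # different tenant: refused
except PermissionError as err:
    print("blocked:", err)
```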

Pathan, A.C., Potey, M.A..  2014.  Detection of Malicious Transaction in Database Using Log Mining Approach. Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on. :262-265.

Data mining is the process of finding correlations in relational databases. There are different techniques for identifying malicious database transactions. Unlike many existing approaches, which profile SQL query structures and database user activities to detect intrusions, the log mining approach relies on automatic discovery to identify anomalous database transactions. Data mining is very helpful to end users for extracting useful business information from large databases. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log mining approach can achieve the desired true and false positive rates when the confidence and support are set appropriately. The implemented system incrementally maintains the data dependency rule sets and optimizes the performance of the intrusion detection process.
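
The data-dependency idea can be illustrated with a small, hypothetical Python sketch: learn which items are (almost) always read before a given item is written in the legitimate log, then flag transactions that violate those read-before-write rules. Thresholds, item names, and the rule format are invented; the paper mines richer multi-level and sequence rules.

```python
# Toy sketch of the log-mining idea (not the paper's algorithm): learn
# "items read before writing X" rules from legitimate transactions, then
# flag transactions that write X without those reads.
from collections import defaultdict

legit = [
    {"reads": {"balance"}, "writes": {"balance"}},
    {"reads": {"balance"}, "writes": {"balance"}},
    {"reads": {"balance", "limit"}, "writes": {"balance"}},
]

def mine_rules(transactions, min_confidence=0.9):
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for t in transactions:
        for w in t["writes"]:
            totals[w] += 1
            for r in t["reads"]:
                counts[w][r] += 1
    rules = {}
    for w, reads in counts.items():
        required = {r for r, c in reads.items() if c / totals[w] >= min_confidence}
        if required:
            rules[w] = required            # e.g. {'balance': {'balance'}}
    return rules

def is_malicious(transaction, rules):
    for w in transaction["writes"]:
        if w in rules and not rules[w] <= transaction["reads"]:
            return True                    # required reads are missing
    return False

rules = mine_rules(legit)
print(is_malicious({"reads": set(), "writes": {"balance"}}, rules))        # True
print(is_malicious({"reads": {"balance"}, "writes": {"balance"}}, rules))  # False
```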

Sanandaji, B.M., Bitar, E., Poolla, K., Vincent, T.L..  2014.  An abrupt change detection heuristic with applications to cyber data attacks on power systems. American Control Conference (ACC), 2014. :5056-5061.

We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
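
The heuristic is simple enough to sketch directly. The numpy snippet below (with an invented signal and threshold, and a plain sliding-window history matrix rather than the paper's exact construction) monitors the largest singular value and flags the abrupt mean shift.

```python
# Minimal numpy sketch of the heuristic: slide a window over the output
# stream, build a small history matrix, and raise an alarm when the largest
# singular value jumps well above its nominal level. All parameters invented.
import numpy as np

def largest_singular_values(signal, rows=5, cols=20):
    span = rows + cols - 1                        # samples per history matrix
    sigmas = []
    for start in range(len(signal) - span + 1):
        window = signal[start:start + span]
        history = np.array([window[i:i + cols] for i in range(rows)])
        sigmas.append(np.linalg.svd(history, compute_uv=False)[0])
    return np.array(sigmas)

rng = np.random.default_rng(0)
nominal = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
shifted = np.sin(0.3 * np.arange(200)) + 2.0 + 0.05 * rng.standard_normal(200)
signal = np.concatenate([nominal, shifted])       # abrupt change at sample 200

sigma = largest_singular_values(signal)
baseline_mean, baseline_std = sigma[:150].mean(), sigma[:150].std()
alarms = np.flatnonzero(sigma > baseline_mean + 6 * baseline_std)
print("first alarm at window index", alarms[0] if alarms.size else None)
```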

Pajic, M., Weimer, J., Bezzo, N., Tabuada, P., Sokolsky, O., Insup Lee, Pappas, G.J..  2014.  Robustness of attack-resilient state estimators. Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on. :163-174.

The interaction between information technology and the physical world makes Cyber-Physical Systems (CPS) vulnerable to malicious attacks beyond the standard cyber attacks. This has motivated the need for attack-resilient state estimation. Yet, the existing state estimators are based on the non-realistic assumption that the exact system model is known. Consequently, in this work we present a method for state estimation in the presence of attacks, for systems with noise and modeling errors. When the estimated states are used by a state-based feedback controller, we show that the attacker cannot destabilize the system by exploiting the difference between the model used for the state estimation and the real physical dynamics of the system. Furthermore, we describe how implementation issues such as jitter, latency and synchronization errors can be mapped into parameters of the state estimation procedure that describe modeling errors, and provide a bound on the state-estimation error caused by modeling errors. This enables mapping control performance requirements into real-time (i.e., timing-related) specifications imposed on the underlying platform. Finally, we illustrate and experimentally evaluate this approach on an unmanned ground vehicle case study.
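
As a much-simplified illustration of why redundancy plus a robust combiner gives attack resilience (this is not the estimator developed in the paper), consider five sensors measuring one scalar state with bounded noise while one of them is compromised:

```python
# Much-simplified illustration of attack-resilient estimation: with redundant
# sensors and fewer than half of them compromised, a median combiner keeps the
# estimate close to the truth despite one arbitrarily corrupted sensor.
import numpy as np

rng = np.random.default_rng(1)
true_state = 10.0
noise_bound = 0.1

readings = true_state + rng.uniform(-noise_bound, noise_bound, size=5)
readings[2] += 50.0                       # one sensor under attack

naive = readings.mean()                   # badly skewed by the attack
resilient = np.median(readings)           # stays within the noise bound

print(f"naive mean estimate: {naive:.2f}")
print(f"median estimate:     {resilient:.2f} (true state {true_state})")
```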

Pura, M.L., Buchs, D..  2014.  A self-organized key management scheme for ad hoc networks based on identity-based cryptography. Communications (COMM), 2014 10th International Conference on. :1-4.

Ad hoc networks represent a very modern technology for providing communication between devices without the need of any prior infrastructure set up, and thus in an “on the spot” manner. But there is a catch: so far there isn't any security scheme that would suit the ad hoc properties of this type of networks and that would also accomplish the needed security objectives. The most promising proposals are the self-organized schemes. This paper presents a work in progress aiming at developing a new self-organized key management scheme that uses identity based cryptography for making impossible some of the attacks that can be performed over the schemes proposed so far, while preserving their advantages. The paper starts with a survey of the most important self-organized key management schemes and a short analysis of the advantages and disadvantages they have. Then, it presents our new scheme, and by using informal analysis, it presents the advantages it has over the other proposals.

Plesca, C., Morogan, L..  2014.  Efficient and robust perceptual hashing using log-polar image representation. Communications (COMM), 2014 10th International Conference on. :1-6.

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These hashes find extensive applications in content authentication, image indexing for database search, and watermarking. Modern robust hashing algorithms consist of feature extraction and a randomization stage to introduce non-invertibility, followed by quantization and binary encoding to produce a binary hash. This paper describes a novel algorithm for generating an image hash based on Log-Polar transform features. The Log-Polar transform is a part of the Fourier-Mellin transformation, often used in image recognition and registration techniques due to its invariance properties under geometric operations. First, we show that the proposed perceptual hash is resistant to content-preserving operations like compression, noise addition, and moderate geometric and filtering operations. Second, we illustrate the discriminative capability of our hash in order to rapidly distinguish between two perceptually different images. Third, we study the security of our method for image authentication purposes. Finally, we show that the proposed hashing method can provide both excellent security and robustness.
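
A rough, generic numpy sketch of the pipeline the abstract describes, not the authors' exact algorithm or key schedule: take the Fourier magnitude, sample it on a log-polar grid, apply a key-dependent random projection, and quantize to bits. The seed stands in for the secret key, and all sizes are arbitrary.

```python
# Generic illustration of a log-polar perceptual hash pipeline (feature
# extraction on a log-polar grid of the Fourier magnitude, key-dependent
# randomization, binary quantization). Not the paper's exact algorithm.
import numpy as np

def log_polar_hash(image, radial_bins=16, angular_bins=16, seed=42):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = np.array(spectrum.shape) // 2
    max_r = min(cy, cx) - 1

    # Sample the magnitude spectrum on a log-polar grid (nearest neighbour).
    radii = np.exp(np.linspace(0, np.log(max_r), radial_bins))
    angles = np.linspace(0, 2 * np.pi, angular_bins, endpoint=False)
    features = np.empty((radial_bins, angular_bins))
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            y = int(np.rint(cy + r * np.sin(a)))
            x = int(np.rint(cx + r * np.cos(a)))
            features[i, j] = spectrum[y, x]

    # Key-dependent randomization: project features onto secret random vectors.
    rng = np.random.default_rng(seed)             # 'seed' stands in for the secret key
    projections = features.flatten() @ rng.standard_normal((features.size, 64))

    # Quantize to a 64-bit binary hash.
    return (projections > np.median(projections)).astype(np.uint8)

img = np.random.default_rng(0).random((128, 128))
print(log_polar_hash(img)[:16])
```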