Biblio
Content delivery applications such as P2P and video streaming generate most of the Internet traffic, and Content Centric Networking (CCN) appears to be an appropriate architecture to satisfy user needs. However, the lack of a scalable routing scheme is one of the main obstacles slowing down a large deployment of CCN at Internet scale. In this paper we propose to use the Software-Defined Networking (SDN) paradigm to decouple the data plane from the control plane, and we present SRSC, a new routing scheme for CCN. Our solution is a clean-slate approach using only CCN messages and the SDN paradigm. We implemented our solution in the NS-3 simulator and performed simulations of our proposal. SRSC shows better performance than the flooding scheme used by default in CCN: it reduces the number of messages while still improving CCN caching performance.
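The message-count argument above can be illustrated with a toy comparison; this is only a sketch under stated assumptions: the topology, node names, the BFS-based controller, and the message accounting are illustrative, not the SRSC protocol itself.

```python
from collections import deque

# Toy comparison of interest forwarding in a small CCN-like topology:
# flooding vs. asking a logically centralized controller for the next hops.
# Topology and message accounting are illustrative only.
graph = {            # node -> neighbors
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],      # E hosts the requested content
}
content_host = "E"

def flood(start):
    """Flood an interest to every neighbor until the content host is reached."""
    msgs, seen, frontier = 0, {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == content_host:
            return msgs
        for nb in graph[node]:
            msgs += 1                  # one interest message per neighbor
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return msgs

def controller_path(start):
    """Controller computes a shortest path once; the interest follows it hop by hop."""
    prev, frontier = {start: None}, deque([start])
    while frontier:                    # BFS run in the control plane
        node = frontier.popleft()
        if node == content_host:
            break
        for nb in graph[node]:
            if nb not in prev:
                prev[nb] = node
                frontier.append(nb)
    hops, node = 0, content_host
    while prev[node] is not None:
        hops, node = hops + 1, prev[node]
    return hops + 1                    # + 1 request message to the controller

print("flooding messages  :", flood("A"))            # 9 in this toy topology
print("controller messages:", controller_path("A"))  # 4 in this toy topology
```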
Early Access DOI: 10.1109/TAC.2017.2713529
We consider linear time-invariant networks with unknown interaction topology where only a subset of the nodes, termed manifest, can be directly controlled and observed. The remaining nodes are termed latent and their number is also unknown. Our goal is to identify the transfer function of the manifest subnetwork and determine whether interactions between manifest nodes are direct or mediated by latent nodes. We show that, if there are no inputs to the latent nodes, then the manifest transfer function can be approximated arbitrarily well in the $H_\infty$-norm sense by the transfer function of an auto-regressive model. Motivated by this result, we present a least-squares estimation method to construct the auto-regressive model from measured data. We establish that the least-squares matrix estimate converges in probability to the matrix sequence defining the desired auto-regressive model as the length of data and the model order grow. We also show that the least-squares auto-regressive method guarantees an arbitrarily small $H_\infty$-norm error in the approximation of the manifest transfer function, exponentially decaying once the model order exceeds a certain threshold. Finally, we show that when the latent subnetwork is acyclic, the proposed method achieves perfect identification of the manifest transfer function above a specific model order as the length of the data increases. Various examples illustrate our results.
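As a hedged illustration of the auto-regressive approximation mentioned above (the symbols are generic, not necessarily the paper's notation): with manifest measurements $y(k)$ and manifest inputs $u(k)$, an order-$n$ auto-regressive model takes the form
$$ y(k) = \sum_{i=1}^{n} A_i\, y(k-i) + B\, u(k-1) + e(k), $$
and the least-squares estimate of the stacked coefficient matrix $\Theta = [A_1 \;\cdots\; A_n \;\; B]$ from $T$ samples is
$$ \widehat{\Theta} = \arg\min_{\Theta} \sum_{k=n}^{T} \bigl\| y(k) - \Theta\,\phi(k) \bigr\|_2^2, \qquad \phi(k) = \begin{bmatrix} y(k-1)^\top & \cdots & y(k-n)^\top & u(k-1)^\top \end{bmatrix}^\top, $$
whose solution is the usual normal-equations estimate. The abstract's results concern how well such a $\widehat{\Theta}$ recovers the manifest transfer function as $T$ and the model order $n$ grow.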
To appear
This paper studies the problem of privacy-preserving average consensus in multi-agent systems. The network objective is to compute the average of the initial agent states while keeping these values differentially private against an adversary that has access to all inter-agent messages. We establish an impossibility result that shows that exact average consensus cannot be achieved by any algorithm that preserves differential privacy. This result motivates our design of a differentially private discrete-time distributed algorithm that corrupts messages with Laplacian noise and is guaranteed to achieve average consensus in expectation. We examine how to optimally select the noise parameters in order to minimize the variance of the network convergence point for a desired level of privacy.
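A minimal numerical sketch of the kind of mechanism described above, assuming a standard Laplacian-based consensus iteration whose transmitted messages are corrupted with Laplace noise; the graph, step size, and noise scale below are illustrative placeholders rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative undirected ring of 5 agents (adjacency matrix).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
n = A.shape[0]
x = rng.uniform(0.0, 10.0, size=n)      # private initial states
exact_avg = x.mean()

h = 0.2        # step size (illustrative)
b = 0.5        # Laplace noise scale (illustrative privacy knob)

for _ in range(200):
    # Each agent broadcasts a noisy copy of its state.
    msgs = x + rng.laplace(scale=b, size=n)
    # Laplacian-style update driven by the *noisy* messages.
    x = x + h * (A @ msgs - A.sum(axis=1) * x)

print("exact average:", exact_avg)
# With fixed-scale noise the realized states keep fluctuating around the
# average; only their expectation converges to exact_avg.
print("final states :", x)
```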
IFAC Workshop on Distributed Estimation and Control in Networked Systems, Philadelphia, PA
We study a class of distributed convex constrained optimization problems where a group of agents aims to minimize the sum of individual objective functions while each agent desires to keep its function differentially private. We prove the impossibility of achieving differential privacy using strategies based on perturbing the inter-agent messages with noise when the underlying noise-free dynamics is asymptotically stable. This justifies our algorithmic solution based on the perturbation of the individual objective functions with Laplace noise within the framework of functional differential privacy. We carefully design post-processing steps that ensure the perturbed functions regain the smoothness and convexity properties of the original functions while preserving the differentially private guarantees of the functional perturbation step. This methodology allows any distributed coordination algorithm to be used to solve the optimization problem on the noisy functions. Finally, we explicitly bound the magnitude of the expected distance between the perturbed and true optimizers, and characterize the privacy-accuracy trade-off. Simulations illustrate our results.
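A toy sketch of the functional-perturbation idea described above, assuming quadratic local objectives whose linear part is perturbed with Laplace noise so the perturbed functions remain smooth and strongly convex; the objectives, noise placement, and solver below are illustrative only, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative local objectives f_i(x) = 0.5 * a_i * (x - c_i)^2 for 4 agents.
a = np.array([1.0, 2.0, 0.5, 1.5])   # curvatures (kept noise-free: convexity preserved)
c = rng.uniform(-5.0, 5.0, size=4)   # private "target" parameters

true_opt = np.sum(a * c) / np.sum(a)          # minimizer of sum_i f_i

# Functional perturbation: add Laplace noise to the linear part only,
# i.e. replace c_i by a noisy c_i; each perturbed f_i stays smooth and convex.
b = 0.3                                        # Laplace scale (illustrative privacy knob)
c_noisy = c + rng.laplace(scale=b, size=4)

# Any distributed coordination algorithm can now be run on the perturbed
# objectives; here we simply compute the perturbed minimizer in closed form.
private_opt = np.sum(a * c_noisy) / np.sum(a)

print("true optimizer     :", true_opt)
print("perturbed optimizer:", private_opt)     # expected gap is bounded in terms of b
```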
To appear
Submitted
This paper studies the multi-agent average consensus problem under the requirement of differential privacy of the agents' initial states against an adversary that has access to all messages. As a fundamental limitation, we first establish that a differentially private consensus algorithm cannot guarantee convergence of the agents' states to the exact average in distribution, which in turn implies the same impossibility for other stronger notions of convergence. This result motivates our design of a novel differentially private Laplacian consensus algorithm in which agents linearly perturb their state-transition and message-generating functions with exponentially decaying Laplace noise. We prove that our algorithm converges almost surely to an unbiased estimate of the average of the agents' initial states, compute the exponential mean-square rate of convergence, and formally characterize its differential privacy properties. Furthermore, we also find explicit optimal values of the design parameters that minimize the variance of the algorithm's convergence point around the exact average. Various simulations illustrate our results.
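For contrast with the fixed-scale sketch earlier, here is a minimal sketch of a consensus iteration driven by exponentially decaying Laplace noise; the step size, decay rate, and noise scale are illustrative placeholders, not the algorithm's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative path graph on 4 agents.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
x = rng.uniform(0.0, 10.0, size=4)
exact_avg = x.mean()

h, s, q = 0.25, 1.0, 0.9   # step size, initial noise scale, decay rate (illustrative)

for k in range(300):
    noise = rng.laplace(scale=s * q**k, size=4)   # exponentially decaying Laplace noise
    msgs = x + noise
    x = x + h * (A @ msgs - deg * x)

# Because the injected noise is summable, the states settle on a common value;
# that value is a random, unbiased estimate of exact_avg rather than exact_avg itself.
print("exact average   :", exact_avg)
print("consensus states:", x)
```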
Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.
Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make it an even bigger challenge to keep data secure. In parallel, high-throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low Density Parity Check (LDPC) Coding (JSALC) method. The JSALC is composed of two parts: the Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). The JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity-check matrix (PCM), which retains the power efficiency compared to conventional systems. We show that the overall JSALC Frame-Error-Rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, the JSALC enables error resilience and has higher diffusion than AES-128.
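A small sketch of the kind of quasi-cyclic parity-check matrix (PCM) structure mentioned above, built by expanding a base matrix of circulant shift values; the base matrix and lifting size here are toy values, not the JSALC design.

```python
import numpy as np

def circulant_perm(shift, z):
    """z x z identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def expand_qc_pcm(base, z):
    """Expand a base matrix into a QC-LDPC parity-check matrix.

    base[i][j] >= 0 gives the circulant shift; -1 marks an all-zero block.
    """
    rows = []
    for brow in base:
        blocks = [circulant_perm(s, z) if s >= 0 else np.zeros((z, z), dtype=int)
                  for s in brow]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Toy 2x4 base matrix with lifting size z = 4 -> an 8 x 16 binary PCM.
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = expand_qc_pcm(base, z=4)
print(H.shape)        # (8, 16)
print(H)
```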
As technology advances, designs become more complex and may contain bugs, which makes verification an indispensable part of product development. The Universal Verification Methodology (UVM) describes a standard method for design verification that is reusable and portable. This paper verifies the IIC bus protocol using UVM. The IIC controller is designed in Verilog using Vivado. It has an APB interface, and its functional and code coverage are measured in Mentor Graphics QuestaSim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.
This paper presents the development and implementation of wearable sensors based on transient responses of textile chemical sensors for an odorant detection system serving as a wearable sensor for a humanoid robot. The textile chemical sensors consist of nine polymer/CNT nano-composite gas sensors, which are deployed in three different prototypes of the wearable humanoid robot: (i) human axillary odor monitoring, (ii) human foot odor tracking, and (iii) wearable personal gas-leakage detection. These prototypes can be integrated into high-performance wearable wellness platforms such as smart clothes, smart shoes, and a wearable pocket toxic-gas detector. The operating mode uses ZigBee wireless communication technology for data acquisition and monitoring. The wearable humanoid robot offers several platforms that can be applied to investigate the role of individual scent produced by different parts of the human body, such as axillary odor and foot odor, which have potential health implications related to abnormal or offensive body odor. Moreover, the wearable personal safety and security component of the robot is also effective for detecting NH3 leakage in the environment. Preliminary results with nine textile chemical sensors for odor-biomarker and NH3 detection demonstrate the feasibility of using the wearable humanoid robot to distinguish the unpleasant odor released during physical activity. The system also showed excellent performance in detecting a hazardous gas such as ammonia (NH3), with sensitivity as low as 5 ppm.
Processes to automate the selection of appropriate algorithms for various matrix computations are described. In particular, processes to check for, and certify, various matrix properties of black-box matrices are presented. These include sparsity patterns and structural properties that allow "superfast" algorithms to be used in place of black-box algorithms. Matrix properties that hold generically, and allow the use of matrix preconditioning to be reduced or eliminated, can also be checked for and certified, notably including in the small-field case, where this presently has the greatest impact on the efficiency of the computation.
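As a hedged illustration of the black-box setting described above (this particular property and test are a generic example, not necessarily one treated in the paper): a black-box matrix is available only through matrix-vector products, and a randomized probe can certify, with high probability, a structural property such as symmetry.

```python
import numpy as np

def is_probably_symmetric(apply_A, apply_At, n, trials=20, seed=None):
    """Randomized black-box test for symmetry: A is accessed only via
    products A @ v (apply_A) and A.T @ v (apply_At).  If A != A.T, a random
    0/1 probe detects it with probability at least 1/4, so `trials` probes
    drive the failure probability down geometrically."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u = rng.integers(0, 2, size=n)
        v = rng.integers(0, 2, size=n)
        # u^T A v vs. v^T A u, computed only from black-box products.
        if u @ apply_A(v) != u @ apply_At(v):
            return False      # witness found: certainly not symmetric
    return True               # symmetric with high probability

# Toy usage: wrap explicit matrices as black boxes.
A = np.array([[2, 1, 0],
              [1, 3, 5],
              [0, 5, 4]])
print(is_probably_symmetric(lambda v: A @ v, lambda v: A.T @ v, n=3))   # True
B = A.copy(); B[0, 2] = 7
print(is_probably_symmetric(lambda v: B @ v, lambda v: B.T @ v, n=3))   # False (w.h.p.)
```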
Information, not just data, is key to today's global challenges. Solving these challenges requires not only advancing geospatial and big data analytics, but also new analysis and decision-making environments that enable reliable decisions from trustable, understandable information and that go beyond current approaches to machine learning and artificial intelligence. These environments are successful when they effectively couple human decision making with advanced, guided spatial analytics in human-computer collaborative discourse and decision making (HCCD). Our HCCD approach builds upon visual analytics, natural scale templates, traceable information, human-guided analytics, and explainable and interactive machine learning, focusing on empowering the decision-maker through interactive visual spatial analytic environments where non-digital human expertise and experience can be combined with state-of-the-art and transparent analytical techniques. When we combine this approach with real-world application-driven research, not only does the pace of scientific innovation accelerate, but impactful change occurs. I'll describe how we have applied these techniques to challenges in sustainability, security, resiliency, public safety, and disaster management.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic, as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean Equal Error Rate (EER), through feature engineering or selection, can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data we show that both of these decisions lead to significant underestimation of error rates, by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
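A brief sketch of how a Gini Coefficient over per-user error rates can expose systematic errors that a mean hides; the two example systems below are synthetic, not drawn from the paper's datasets.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative sample (0 = errors spread evenly,
    values near 1 = errors concentrated on a few users)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Two synthetic systems with the same mean FAR of 5%:
even_errors     = np.full(20, 0.05)                   # errors spread over all users
systemic_errors = np.array([0.5] * 2 + [0.0] * 18)    # two users almost always accepted

print(np.mean(even_errors), np.mean(systemic_errors))   # both 0.05
print(gini(even_errors), gini(systemic_errors))         # ~0.0 vs ~0.9
```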
The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber-threats in French and Russian forums.