Biblio

Found 276 results

Filters: First Letter Of Last Name is E
E
E. Aubry, T. Silverston, I. Chrisment.  2015.  SRSC: SDN-based routing scheme for CCN. Proceedings of the 2015 1st IEEE Conference on Network Softwarization (NetSoft). :1-5.

Content delivery such as P2P or video streaming generates the main part of Internet traffic, and Content Centric Networking (CCN) appears to be an appropriate architecture to satisfy user needs. However, the lack of a scalable routing scheme is one of the main obstacles slowing down a large deployment of CCN at Internet scale. In this paper we propose to use the Software-Defined Networking (SDN) paradigm to decouple the data plane and the control plane, and present SRSC, a new routing scheme for CCN. Our solution is a clean-slate approach using only CCN messages and the SDN paradigm. We implemented our solution in the NS-3 simulator and performed simulations of our proposal. SRSC shows better performance than the flooding scheme used by default in CCN: it reduces the number of messages while still improving CCN caching performance.

E. Mallada.  2016.  iDroop: A Dynamic Droop controller to decouple power grid's steady-state and dynamic performance. 2016 IEEE 55th Conference on Decision and Control (CDC). :4957-4964.
E. Mallada, C. Zhao, S. Low.  2017.  Optimal Load-side Control for Frequency Regulation in Smart Grids. IEEE Trans. on Automatic Control (published online).

Early Access DOI: 10.1109/TAC.2017.2713529

E. Nozari, Y. Zhao, J. Cortes.  2018.  Network identification with latent nodes via auto-regressive models. IEEE Transactions on Control of Network Systems.

We consider linear time-invariant networks with unknown interaction topology where only a subset of the nodes, termed manifest, can be directly controlled and observed. The remaining nodes are termed latent and their number is also unknown. Our goal is to identify the transfer function of the manifest subnetwork and determine whether interactions between manifest nodes are direct or mediated by latent nodes. We show that, if there are no inputs to the latent nodes, then the manifest transfer function can be approximated arbitrarily well in the $H_\infty$-norm sense by the transfer function of an auto-regressive model. Motivated by this result, we present a least-squares estimation method to construct the auto-regressive model from measured data. We establish that the least-squares matrix estimate converges in probability to the matrix sequence defining the desired auto-regressive model as the length of data and the model order grow. We also show that the least-squares auto-regressive method guarantees an arbitrarily small $H_\infty$-norm error in the approximation of the manifest transfer function, exponentially decaying once the model order exceeds a certain threshold. Finally, we show that when the latent subnetwork is acyclic, the proposed method achieves perfect identification of the manifest transfer function above a specific model order as the length of the data increases. Various examples illustrate our results.

To appear
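
The least-squares auto-regressive construction described in the abstract above can be sketched in a few lines. This is a generic vector-AR fit by ordinary least squares, not the authors' exact estimator; the model order `p`, the regressor layout, and the function name are illustrative assumptions:

```python
import numpy as np

def fit_ar_least_squares(Y, p):
    """Fit an order-p vector auto-regressive model
        y_t ~ A_1 y_{t-1} + ... + A_p y_{t-p}
    by ordinary least squares.
    Y has shape (T, n): T time samples of n manifest-node states.
    Returns the list [A_1, ..., A_p] of estimated (n, n) matrices."""
    T, n = Y.shape
    # Each regressor row stacks the p most recent states: [y_{t-1}, ..., y_{t-p}]
    X = np.hstack([Y[p - k : T - k] for k in range(1, p + 1)])  # (T-p, n*p)
    targets = Y[p:]                                             # (T-p, n)
    # Solve targets ~ X @ B; B vertically stacks the transposed A_k blocks
    B, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return [B[k * n : (k + 1) * n].T for k in range(p)]
```

On data generated by a stable linear network driven by small noise, the estimate converges to the true coefficient matrices as the data length grows, matching the consistency result stated in the abstract.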

E. Nozari, P. Tallapragada, J. Cortes.  2015.  Differentially private average consensus with optimal noise selection. IFAC-PapersOnLine. 48:203-208.

This paper studies the problem of privacy-preserving average consensus in multi-agent systems. The network objective is to compute the average of the initial agent states while keeping these values differentially private against an adversary that has access to all inter-agent messages. We establish an impossibility result that shows that exact average consensus cannot be achieved by any algorithm that preserves differential privacy. This result motivates our design of a differentially private discrete-time distributed algorithm that corrupts messages with Laplacian noise and is guaranteed to achieve average consensus in expectation. We examine how to optimally select the noise parameters in order to minimize the variance of the network convergence point for a desired level of privacy.

IFAC Workshop on Distributed Estimation and Control in Networked Systems, Philadelphia, PA

E. Nozari, P. Tallapragada, J. Cortes.  2017.  Differentially Private Distributed Convex Optimization via Functional Perturbation.

We study a class of distributed convex constrained optimization problems where a group of agents aims to minimize the sum of individual objective functions while each desires to keep its function differentially private. We prove the impossibility of achieving differential privacy using strategies based on perturbing the inter-agent messages with noise when the underlying noise-free dynamics is asymptotically stable. This justifies our algorithmic solution based on the perturbation of the individual objective functions with Laplace noise within the framework of functional differential privacy. We carefully design post-processing steps that ensure the perturbed functions regain the smoothness and convexity properties of the original functions while preserving the differential privacy guarantees of the functional perturbation step. This methodology allows the use of any distributed coordination algorithm to solve the optimization problem on the noisy functions. Finally, we explicitly bound the magnitude of the expected distance between the perturbed and true optimizers, and characterize the privacy-accuracy trade-off. Simulations illustrate our results.

To appear

E. Nozari, P. Tallapragada, J. Cortes.  2017.  Differentially private average consensus: obstructions, trade-offs, and optimal algorithm design. Automatica. 81:221-231.

This paper studies the multi-agent average consensus problem under the requirement of differential privacy of the agents' initial states against an adversary that has access to all messages. As a fundamental limitation, we first establish that a differentially private consensus algorithm cannot guarantee convergence of the agents' states to the exact average in distribution, which in turn implies the same impossibility for other stronger notions of convergence. This result motivates our design of a novel differentially private Laplacian consensus algorithm in which agents linearly perturb their state-transition and message-generating functions with exponentially decaying Laplace noise. We prove that our algorithm converges almost surely to an unbiased estimate of the average of the agents' initial states, compute the exponential mean-square rate of convergence, and formally characterize its differential privacy properties. Furthermore, we find explicit optimal values of the design parameters that minimize the variance of the algorithm's convergence point around the exact average. Various simulations illustrate our results.
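
A minimal sketch of the class of mechanism this abstract describes: each agent perturbs its outgoing message with Laplace noise whose scale decays geometrically, and states are mixed through a doubly stochastic weight matrix. The noise schedule, weights, and parameter values below are illustrative assumptions, not the paper's optimized design:

```python
import numpy as np

def private_consensus(x0, W, noise_scale=0.1, decay=0.5, steps=60, seed=None):
    """Noisy linear average consensus: at step k every agent adds
    Laplace noise of scale noise_scale * decay**k to its outgoing
    message, then states are mixed by the doubly stochastic matrix W.
    The decaying noise lets agents agree on a value that is an
    unbiased, privacy-preserving estimate of the initial average."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        noisy = x + rng.laplace(0.0, noise_scale * decay**k, size=x.shape)
        x = W @ noisy
    return x
```

Because the noise is zero-mean and W preserves the network average, each run settles on a common value whose expectation is the exact average; the residual offset is the price of privacy that the paper's optimal parameter choices minimize.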

E. Peterson.  2016.  Dagger: Modeling and visualization for mission impact situation awareness. MILCOM 2016 - 2016 IEEE Military Communications Conference. :25-30.

Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.

E. Pisek, S. Abu-Surra, R. Taori, J. Dunham, D. Rajan.  2015.  Enhanced Cryptcoding: Joint Security and Advanced Dual-Step Quasi-Cyclic LDPC Coding. 2015 IEEE Global Communications Conference (GLOBECOM). :1-7.

Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make it an even bigger challenge to keep data secure. In parallel, high-throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low Density Parity Check (LDPC) Coding (JSALC) method. The JSALC is composed of two parts: the Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). The JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity-check matrix (PCM), which retains power efficiency compared to conventional systems. We show that the overall JSALC Frame-Error-Rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, the JSALC enables error resilience and has higher diffusion than AES-128.

E.V., Jaideep Varier, V., Prabakar, Balamurugan, Karthigha.  2019.  Design of Generic Verification Procedure for IIC Protocol in UVM. 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). :1146-1150.

With the growth of technology, designs have become more complex and may contain bugs, which makes verification an indispensable part of product development. UVM describes a standard method for the verification of designs that is reusable and portable. This paper verifies the IIC bus protocol using the Universal Verification Methodology. The IIC controller is designed in Verilog using Vivado. It has an APB interface, and its functional and code coverage analysis is carried out in Mentor Graphics Questasim 10.4e. This work achieved 83.87% code coverage and 91.11% functional coverage.

Eamsa-ard, T., Seesaard, T., Kerdcharoen, T..  2018.  Wearable Sensor of Humanoid Robot-Based Textile Chemical Sensors for Odor Detection and Tracking. 2018 International Conference on Engineering, Applied Sciences, and Technology (ICEAST). :1-4.

This paper presents the development and implementation of wearable sensors based on the transient responses of textile chemical sensors, serving as an odorant detection system for a humanoid robot. The textile chemical sensors consist of nine polymer/CNT nano-composite gas sensors, which can be divided into three prototypes of the wearable humanoid robot: (i) human axillary odor monitoring, (ii) human foot odor tracking, and (iii) wearable personal gas-leakage detection. These prototypes can be integrated into high-performance wearable wellness platforms such as smart clothes, smart shoes, and a wearable pocket toxic-gas detector. The operating mode is designed to use ZigBee wireless communication technology for data acquisition and monitoring. The wearable humanoid robot offers several platforms that can be applied to investigate the role of individual scents produced by different parts of the human body, such as axillary odor and foot odor, which have potential health implications stemming from abnormal or offensive body odor. Moreover, the wearable personal safety and security component of the robot is also effective for detecting NH3 leakage in the environment. Preliminary results with nine textile chemical sensors for odor-biomarker and NH3 detection demonstrate the feasibility of using the wearable humanoid robot to distinguish unpleasant odors released during physical activity. It also showed excellent performance in detecting a hazardous gas such as ammonia (NH3), with sensitivity as low as 5 ppm.

Eberly, Wayne.  2016.  Selecting Algorithms for Black Box Matrices: Checking For Matrix Properties That Can Simplify Computations. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation. :207–214.

Processes to automate the selection of appropriate algorithms for various matrix computations are described. In particular, processes to check for, and certify, various matrix properties of black-box matrices are presented. These include sparsity patterns and structural properties that allow "superfast" algorithms to be used in place of black-box algorithms. Matrix properties that hold generically, and allow the use of matrix preconditioning to be reduced or eliminated, can also be checked for and certified, notably including in the small-field case, where this presently has the greatest impact on the efficiency of the computation.

Ebert, David S..  2019.  Visual Spatial Analytics and Trusted Information for Effective Decision Making. Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. :2.

Information, not just data, is key to today's global challenges. Solving these challenges requires not only advancing geospatial and big data analytics but also new analysis and decision-making environments that enable reliable decisions from trustable, understandable information, going beyond current approaches to machine learning and artificial intelligence. These environments are successful when they effectively couple human decision making with advanced, guided spatial analytics in human-computer collaborative discourse and decision making (HCCD). Our HCCD approach builds upon visual analytics, natural scale templates, traceable information, human-guided analytics, and explainable and interactive machine learning, focusing on empowering the decision-maker through interactive visual spatial analytic environments where non-digital human expertise and experience can be combined with state-of-the-art and transparent analytical techniques. When we combine this approach with real-world application-driven research, not only does the pace of scientific innovation accelerate, but impactful change occurs. I'll describe how we have applied these techniques to challenges in sustainability, security, resiliency, public safety, and disaster management.

Eberz, Simon, Rasmussen, Kasper B., Lenders, Vincent, Martinovic, Ivan.  2017.  Evaluating Behavioral Biometrics for Continuous Authentication: Challenges and Metrics. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :386–399.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean EER (through feature engineering or selection) can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used.
Using real-world data we show that both of these decisions lead to significant underestimation of error rates by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
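
The Gini-coefficient metric this paper proposes can be computed directly from per-user error rates. The sketch below uses the standard ordered-values formula and is an illustration, not the authors' evaluation code:

```python
import numpy as np

def gini(error_rates):
    """Gini coefficient of per-user error rates.
    0 means errors are spread evenly across users; values near 1 mean
    a few users absorb most of the errors, i.e. systematic error,
    which is the security-relevant failure mode."""
    x = np.sort(np.asarray(error_rates, dtype=float))
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0  # no errors at all: perfectly even by convention
    # Standard closed form over the sorted values
    index = np.arange(1, n + 1)
    return 2 * (index * x).sum() / (n * total) - (n + 1) / n
```

For example, four users with identical error rates give a coefficient of 0, while one user carrying all the errors gives (n-1)/n, flagging a system whose mean EER hides a perpetually undetected attacker.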

Ebrahimabadi, Mohammad, Younis, Mohamed, Lalouani, Wassila, Karimi, Naghmeh.  2022.  An Attack Resilient PUF-based Authentication Mechanism for Distributed Systems. 2022 35th International Conference on VLSI Design and 2022 21st International Conference on Embedded Systems (VLSID). :108–113.
In most PUF-based authentication schemes, a central server is usually engaged to verify the response of the device’s PUF to challenge bit-streams. However, the server availability may be intermittent in practice. To tackle such an issue, this paper proposes a new protocol for supporting distributed authentication while avoiding vulnerability to information leakage where CRPs could be retrieved from hacked devices and collectively used to model the PUF. The main idea is to provision for scrambling the challenge bit-stream in a way that is dependent on the verifier. The scrambling pattern varies per authentication round for each device and independently across devices. In essence, the scrambling function becomes node- and packet-specific and the response received by two verifiers of one device for the same challenge bit-stream could vary. Thus, neither the scrambling function can be reverted, nor the PUF can be modeled even by a collusive set of malicious nodes. The validation results using data of an FPGA-based implementation demonstrate the effectiveness of our approach in thwarting PUF modeling attacks by collusive actors. We also discuss the approach resiliency against impersonation, Sybil, and reverse engineering attacks.
Ebrahimabadi, Mohammad, Younis, Mohamed, Lalouani, Wassila, Karimi, Naghmeh.  2021.  A Novel Modeling-Attack Resilient Arbiter-PUF Design. 2021 34th International Conference on VLSI Design and 2021 20th International Conference on Embedded Systems (VLSID). :123–128.
Physically Unclonable Functions (PUFs) have been considered as promising lightweight primitives for random number generation and device authentication. Thanks to the imperfections occurring during the fabrication process of integrated circuits, each PUF generates a unique signature which can be used for chip identification. Although supposed to be unclonable, PUFs have been shown to be vulnerable to modeling attacks where a set of collected challenge response pairs are used for training a machine learning model to predict the PUF response to unseen challenges. Challenge obfuscation has been proposed to tackle the modeling attacks in recent years. However, knowing the obfuscation algorithm can help the adversary to model the PUF. This paper proposes a modeling-resilient arbiter-PUF architecture that benefits from the randomness provided by PUFs in concealing the obfuscation scheme. The experimental results confirm the effectiveness of the proposed structure in countering PUF modeling attacks.
Ebrahimi, M., Samtani, S., Chai, Y., Chen, H..  2020.  Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach. 2020 IEEE Security and Privacy Workshops (SPW). :20—26.

The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation is the prevailing approach to process non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Networks (GANs) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.

Ebrahimi, Najme, Yektakhah, Behzad, Sarabandi, Kamal, Kim, Hun Seok, Wentzloff, David, Blaauw, David.  2019.  A Novel Physical Layer Security Technique Using Master-Slave Full Duplex Communication. 2019 IEEE MTT-S International Microwave Symposium (IMS). :1096—1099.
In this work we present a novel technique for physical layer security in Internet-of-Things (IoT) networks. In the proposed architecture, each IoT node generates a phase-modulated random key/data and transmits it to a master node in the presence of an eavesdropper, referred to as Eve. The master node, simultaneously, broadcasts a high-power signal using an omni-directional antenna, which is received as interference by Eve. This interference masks the key generated by the IoT node and results in a higher bit error rate in the data received by Eve. The two legitimate nodes communicate in a full-duplex manner and, consequently, subtract their transmitted signals, as a known reference, from the received signal (self-interference cancellation). We compare our proposed method with a conventional approach to physical layer security based on directional antennas. In particular, we show, using theoretical and measurement results, that our proposed approach provides significantly better security measures, in terms of bit error rate (BER) at Eve's location. Also, it is proven that in our novel system, the possible eavesdropping region, defined by the region with BER < 10^-1, is always smaller than the reliable communication region with BER < 10^-3.
Ebrahimian, Mahsa, Kashef, Rasha.  2020.  Efficient Detection of Shilling’s Attacks in Collaborative Filtering Recommendation Systems Using Deep Learning Models. 2020 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). :460–464.
Recommendation systems, especially collaborative filtering recommenders, are vulnerable to shilling attacks as some profit-driven users may inject fake profiles into the system to alter recommendation outputs. Current shilling attack detection methods are mostly based on feature extraction techniques. The hand-designed features can confine the model to specific domains or datasets while deep learning techniques enable us to derive deeper level features, enhance detection performance, and generalize the solution on various datasets and domains. This paper illustrates the application of two deep learning methods to detect shilling attacks. We conducted experiments on the MovieLens 100K and Netflix Dataset with different levels of attacks and types. Experimental results show that deep learning models can achieve an accuracy of up to 99%.
Ecik, Harun.  2021.  Comparison of Active Vulnerability Scanning vs. Passive Vulnerability Detection. 2021 International Conference on Information Security and Cryptology (ISCTURKEY). :87–92.
Vulnerability analysis is an integral part of an overall security program. By identifying known security flaws and weaknesses, vulnerability identification tools help security practitioners remediate existing vulnerabilities on their networks. Thus, it is crucial that the results of such tools are complete, accurate, and timely, and that they produce vulnerability results with minimal or no side effects on the networks. To achieve these goals, network-based vulnerability scanners can follow either an Active Vulnerability Scanning (AVS) or a Passive Vulnerability Detection (PVD) approach. In this work, we evaluate these two approaches with respect to efficiency and effectiveness. For the effectiveness analysis, we compare the two approaches empirically in a test environment and evaluate their outcomes. In terms of overall accuracy and precision, the PVD results are higher than those of AVS. As a result of our analysis, we conclude that PVD returns more complete and accurate results, with considerably shorter scanning periods and no side effects on networks, compared to AVS.
Eckard Böde, Matthias Büker, Ulrich Eberle, Martin Fränzle, Sebastian Gerwinn, Birte Kramer.  2018.  Efficient Splitting of Test and Simulation Cases for the Verification of Highly Automated Driving Functions. Computer Safety, Reliability, and Security. :139-153.
We address the question of the feasibility of tests to verify highly automated driving functions by optimizing the trade-off between virtual tests for verifying safety properties and physical tests for validating the models used for such verification. We follow a quantitative approach based on a probabilistic treatment of the different quantities in question. That is, we quantify the accuracy of a model in terms of its probabilistic prediction ability. Similarly, we quantify the compliance of a system with its requirements in terms of the probability of satisfying these requirements. Depending on the costs of an individual virtual and physical test, we are then able to calculate an optimal trade-off between physical and virtual tests while still guaranteeing a probability of satisfying all requirements.