2020-03-18
Zkik, Karim, Sebbar, Anass, Baadi, Youssef, Belhadi, Amine, Boulmalf, Mohammed.  2019.  An efficient modular security plane AM-SecP for hybrid distributed SDN. 2019 International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :354–359.

Software defined networks (SDNs) represent a new centralized network architecture that facilitates the deployment of services, applications and policies from the upper layers (the management and control planes) to the lower layers (the data plane and the end-user layer). SDNs offer several advantages in terms of agility and flexibility, especially for mobile operators and internet service providers. However, the implementation of these types of networks faces several technical challenges and security issues. In this paper we focus on SDN security issues and propose the implementation of a centralized security layer named AM-SecP. The proposed layer is linked vertically to all SDN layers, which eases packet inspection and intrusion detection. The purpose of this architecture is to detect and stop malware infections, denial-of-service and tunneling attacks without encumbering the network with expensive operations and high computational cost. The proposed framework is also implemented to demonstrate its feasibility and robustness.

2020-03-16
Singh, Rina, Graves, Jeffrey A., Anantharaj, Valentine, Sukumar, Sreenivas R..  2019.  Evaluating Scientific Workflow Engines for Data and Compute Intensive Discoveries. 2019 IEEE International Conference on Big Data (Big Data). :4553–4560.
Workflow engines used to script scientific experiments involving numerical simulation, data analysis, instruments, edge sensors, and artificial intelligence have to deal with the complexities of hardware, software, resource availability, and the collaborative nature of science. In this paper, we survey workflow engines used in data-intensive and compute-intensive discovery pipelines from scientific disciplines such as astronomy, high energy physics, earth system science, bio-medicine, and material science and present a qualitative analysis of their respective capabilities. We compare five popular workflow engines and their differing approaches to job orchestration, job launching, data management and provenance, security authentication, ease-of-use, workflow description, and scripting semantics. The comparisons presented in this paper allow practitioners to choose the appropriate engine for their scientific experiment and lead to recommendations for future work.
2020-03-09
Gope, Prosanta, Sikdar, Biplab.  2018.  An Efficient Privacy-Preserving Dynamic Pricing-Based Billing Scheme for Smart Grids. 2018 IEEE Conference on Communications and Network Security (CNS). :1–2.

This paper proposes a lightweight and privacy-preserving data aggregation scheme for dynamic electricity-pricing-based billing in smart grids using the concept of single-pass authenticated encryption (AE). Unlike the existing literature, which only considers static pricing, this is, to the best of our knowledge, the first paper to address privacy under dynamic pricing.
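As a rough illustration of the underlying building block only (not the paper's billing or aggregation protocol), the sketch below uses single-pass authenticated encryption (AES-GCM, via the Python cryptography package) to protect a meter reading while binding it to a meter ID and billing period through associated data. The field layout, key handling, and identifiers are assumptions made for illustration.

```python
# Illustrative sketch only: single-pass authenticated encryption (AES-GCM)
# applied to a smart-meter reading. The field names and key handling are
# assumptions; the paper's actual aggregation/billing protocol is not shown.
import os
import json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared between meter and utility (assumed)
aead = AESGCM(key)

reading = json.dumps({"kwh": 3.42, "tariff": "peak"}).encode()
associated_data = b"meter-0042|2018-06-01T14:30"  # authenticated but not encrypted
nonce = os.urandom(12)                            # 96-bit nonce, never reused per key

ciphertext = aead.encrypt(nonce, reading, associated_data)

# The receiver verifies integrity and decrypts in one pass.
plaintext = aead.decrypt(nonce, ciphertext, associated_data)
assert json.loads(plaintext)["kwh"] == 3.42
```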

Song, Zekun, Wang, Yichen, Zong, Pengyang, Ren, Zhiwei, Qi, Di.  2019.  An Empirical Study of Comparison of Code Metric Aggregation Methods–on Embedded Software. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :114–119.

How to evaluate software reliability based on the historical data of embedded software projects is one of the problems we have to face in practical engineering. Therefore, we establish a software reliability evaluation model based on code metrics. This evaluation technique requires the aggregation of software code metrics into project metrics. Statistical value methods, metric distribution methods, and econometric methods are commonly used aggregation methods. Our concerns are how these methods differ in the software reliability evaluation process and which of them can improve the accuracy of the reliability assessment model we have established. In view of these concerns, we conduct an empirical study of the application of code metric aggregation methods based on actual projects. We identify the distributions of the code metrics for the projects under study and, using these distribution laws, optimize the aggregation of code metrics and improve the accuracy of the software reliability evaluation model.
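To make the contrast between the three aggregation families concrete, a minimal sketch follows that aggregates a per-file code metric into project-level values using a statistical value (mean/median), a distribution-based summary (fitted log-normal parameters), and an econometric inequality index (Gini). The sample data and the specific aggregators are illustrative assumptions, not the exact methods compared in the paper.

```python
# Illustrative aggregation of a per-file metric into project-level values.
# The sample data and the specific aggregators are assumptions.
import numpy as np

file_metric = np.array([1, 2, 2, 3, 5, 8, 13, 40], dtype=float)  # e.g., complexity per file

# Statistical-value aggregation
mean_value = file_metric.mean()
median_value = np.median(file_metric)

# Distribution-based aggregation: fit a log-normal and report its parameters
log_values = np.log(file_metric)
lognormal_mu, lognormal_sigma = log_values.mean(), log_values.std(ddof=1)

# Econometric aggregation: Gini index of the metric's distribution across files
def gini(values: np.ndarray) -> float:
    v = np.sort(values)
    n = v.size
    cumulative = np.cumsum(v)
    return (n + 1 - 2 * (cumulative / cumulative[-1]).sum()) / n

print(mean_value, median_value, lognormal_mu, lognormal_sigma, gini(file_metric))
```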

Tun, Hein, Lupin, Sergey, Than, Ba Hla, Nay Zaw Linn, Kyaw, Khaing, Min Thu.  2019.  Estimation of Information System Security Using Hybrid Simulation in AnyLogic. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1829–1834.
Nowadays the role of information systems in our lives has greatly increased, which has become one of the biggest challenges for citizens, organizations and governments. Every single day we are becoming more and more dependent on information and communication technology (ICT). A major goal of information security is to find the best ways to mitigate the risks. The context-role and perimeter-protection approaches can reduce and prevent unauthorized penetration of protected zones and of the information systems inside those zones. The results of this work can be useful for analyzing and optimizing organizations' security systems.
Patil, Jagruti M., Chaudhari, Sangita S..  2019.  Efficient Privacy Preserving and Dynamic Public Auditing for Storage Cloud. 2019 International Conference on Nascent Technologies in Engineering (ICNTE). :1–6.
In recent years, cloud computing has gained importance and is being used in almost all applications in terms of various services. One of the most widely used services is storage as a service. Even though the stored data can be accessed at any time and from any place, the security of such data remains a prime concern of the storage server as well as the data owner, since the stored data could be altered or deleted. Therefore, it is essential to verify the correctness of the data (auditing), and an agent termed a Third Party Auditor (TPA) can be utilized to do so. Existing auditing approaches have their own strengths and weaknesses. Hence, it is essential to propose an auditing scheme that eliminates the limitations of existing auditing mechanisms. Here we propose a public auditing scheme that supports data dynamics as well as preserving privacy. The data owner, TPA, and cloud server are integral parts of any auditing mechanism. Data in the form of blocks is encoded, hashed and concatenated, and then a signature is calculated on it. The scheme also supports data dynamics in terms of addition, modification and deletion of data. The TPA reads the encoded data from the cloud server and performs hashing, merging and signature calculation to check the correctness of the data. In this paper, we propose efficient privacy-preserving and dynamic public auditing by utilizing a Merkle Hash Tree (MHT) for indexing the encoded data. It allows data to be updated dynamically while preserving data integrity and supports data dynamics operations such as insertion, modification and deletion. Several users can request storage-correctness checks simultaneously, and these are handled efficiently in the proposed scheme. It also minimizes the communication and computing cost. The proposed auditing scheme is evaluated experimentally, and results are reported for various block-size and file-size parameters.
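The auditing pipeline hashes encoded blocks and indexes them with a Merkle Hash Tree so that integrity can be checked without retrieving whole files. The sketch below covers only the generic MHT part (root computation and membership-proof verification) over already-encoded blocks; the block contents, hash choice, and proof format are assumptions, and the paper's signatures and dynamic-update handling are not reproduced.

```python
# Minimal Merkle Hash Tree sketch (SHA-256): root computation and proof check.
# Block data and proof format are illustrative assumptions.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Return the sibling hashes needed to recompute the root for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [b"encoded block %d" % i for i in range(5)]
root = merkle_root(blocks)
assert verify(blocks[3], merkle_proof(blocks, 3), root)
```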
2020-03-02
Fu, Rao, Grinberg, Ilya, Gogolyuk, Petro.  2019.  Electric Power Distribution System Fault Recovery Based on Visual Computation. 2019 IEEE 20th International Conference on Computational Problems of Electrical Engineering (CPEE). :1–4.

A case study of electric power distribution system fault recovery is introduced in this article. With proper connections, network reconfiguration should be considered an effective solution to a system fault condition. Considering the radial structure of the distribution system, observation of the visualized voltage profile can help the system operator identify the best switching line effectively. Contour plots are applied to visualize the voltage profiles of a modified IEEE 13-node test feeder model.

Alioto, Massimo, Taneja, Sachin.  2019.  Enabling Ubiquitous Hardware Security via Energy-Efficient Primitives and Systems: (Invited Paper). 2019 IEEE Custom Integrated Circuits Conference (CICC). :1–8.
Security down to hardware (HW) has become a fundamental requirement in highly-connected and ubiquitously deployed systems, as a result of the recent discovery of a wide range of vulnerabilities in commercial devices, as well as the affordability of several attacks that were traditionally considered unlikely. HW security is now a fundamental requirement in view of the massive attack surface that such systems expose, and the substantial power penalty entailed by solutions at higher levels of abstraction. In large-scale networks of connected devices, attacks need to be counteracted at low cost down to individual nodes, which need to be identified or authenticated securely and to protect the confidentiality and integrity of the data that is sensed, stored, processed and wirelessly exchanged. In many security-sensitive applications, physical attacks against individual chips need to be counteracted to truly enable an end-to-end chain of trust from nodes to cloud and actuation (i.e., always-on security). These requirements have motivated the on-going global research and development effort to assure hardware security at low cost and power penalty down to low-end devices (i.e., ubiquitous security). This paper provides a fresh overview of the fundamentals, the design requirements and the state of the art in primitives for HW security. Challenges and future directions are discussed using recent silicon demonstrations as case studies.
Zhao, Zhijun, Jiang, Zhengwei, Wang, Yueqiang, Chen, Guoen, Li, Bo.  2019.  Experimental Verification of Security Measures in Industrial Environments. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). :498–502.
Industrial Control Security (ICS) plays an important role in protecting industrial assets and processes from being tampered with by attackers. Recent years have witnessed the fast development of ICS technology. However, there is still a shortage of techniques and measures to verify the effectiveness of ICS approaches. In this paper, we propose a verification framework named vICS for security measures in industrial environments. vICS does not require installing any agent in the industrial environment and can be viewed as non-intrusive. We use vICS to evaluate the effectiveness of classic ICS techniques and measures through several experiments. The results show that vICS provides a feasible solution for verifying the effectiveness of classic ICS techniques and measures in industrial environments.
2020-02-26
Dong, Jiaojiao, Zhu, Lin, Liu, Yilu, Rizy, D. Tom.  2019.  Enhancing Distribution System Monitoring and Resiliency: A Sensor Placement Optimization Tool (SPOT). 2019 IEEE Power Energy Society General Meeting (PESGM). :1–5.

Optimal placement of new sensors is of great importance to enhancing distribution system monitoring and resiliency. Utilities are in need of a platform for an optimal sensor placement strategy other than the traditional experience-based strategy. In this paper, a sensor placement optimization tool (SPOT) is developed. It contains two selected modules based on industry priority: distribution system state estimation (DSE) and recloser placement (RP). The DSE module incorporates three-phase system functionality to reflect practical distribution systems with asymmetrical topology and unbalanced loading. In the RP module, the impact of microgrids is modeled. SPOT is timely since it can assist utilities in developing their own optimal sensor allocation strategies.

Sabbagh, Majid, Gongye, Cheng, Fei, Yunsi, Wang, Yanzhi.  2019.  Evaluating Fault Resiliency of Compressed Deep Neural Networks. 2019 IEEE International Conference on Embedded Software and Systems (ICESS). :1–7.

Model compression is considered to be an effective way to reduce the implementation cost of deep neural networks (DNNs) while maintaining inference accuracy. Many recent studies have developed efficient model compression algorithms and implementations in accelerators on various devices. Protecting the integrity of DNN inference against fault attacks is important for diverse deep-learning-enabled applications. However, there has been little research investigating the fault resilience of DNNs and the impact of model compression on fault tolerance. In this work, we consider faults on different data types and develop a simulation framework for understanding the fault resiliency of compressed DNN models as compared to uncompressed models. We perform our experiments on two common DNNs, LeNet-5 and VGG16, and evaluate their fault resiliency with different types of compression. The results show that binary quantization can effectively increase the fault resilience of DNN models by 10000x for both LeNet-5 and VGG16. Finally, we propose software and hardware mitigation techniques to increase the fault resiliency of DNN models.
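A fault-injection simulation of this kind typically flips bits in stored weights and then measures the accuracy drop. The sketch below shows only the core primitive for an 8-bit quantized weight tensor: choosing random (byte, bit) positions and XOR-flipping them. The tensor shape, fault count, and quantization format are assumptions, and the paper's accuracy-evaluation loop is not reproduced.

```python
# Illustrative bit-flip fault injection into an int8 weight tensor.
# Shapes, fault counts and quantization format are assumptions.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(64, 32), dtype=np.int8)   # stand-in for a quantized layer

def inject_bit_flips(w: np.ndarray, n_faults: int, rng) -> np.ndarray:
    """Return a copy of w with n_faults random single-bit flips."""
    faulty = w.copy()
    flat = faulty.view(np.uint8).reshape(-1)          # reinterpret bytes for bit manipulation
    positions = rng.integers(0, flat.size, size=n_faults)
    bits = rng.integers(0, 8, size=n_faults)
    flat[positions] ^= (1 << bits).astype(np.uint8)
    return faulty

faulty_weights = inject_bit_flips(weights, n_faults=10, rng=rng)
print("weights changed:", int((faulty_weights != weights).sum()))
```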

Wang, Yuze, Han, Tao, Han, Xiaoxia, Liu, Peng.  2019.  Ensemble-Learning-Based Hardware Trojans Detection Method by Detecting the Trigger Nets. 2019 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.

With the globalization of integrated circuit (IC) design and manufacturing, malicious third-party vendors can easily insert hardware Trojans into their intellectual property (IP) cores during the IC design phase, threatening the security of IC systems. There is therefore a strong need for hardware-Trojan detection methods targeted at the IC design phase. Exploiting the particularity of Trigger nets in Trojan circuits, in this paper we propose an ensemble-learning-based hardware-Trojan detection method that detects the Trigger nets at the gate level. We extract the Trigger-net features for each net from known netlists and use ensemble learning to train two detection models according to the Trojan types. The detection models are used to identify suspicious Trigger nets in an unknown netlist under detection and to output a suspiciousness value for each net. By flagging the top n% most suspicious nets of each detection model as suspicious Trigger nets based on these values, the proposed method achieves, on average, an 88% true positive rate, a 90% true negative rate, and 90% accuracy.
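Conceptually this is supervised classification over per-net features followed by top-n% thresholding. The sketch below trains a random-forest ensemble on synthetic net features and flags the most suspicious nets of an unknown set; the synthetic features, the random-forest choice, and the threshold are assumptions for illustration rather than the authors' exact models.

```python
# Illustrative per-net classification with an ensemble model and top-n% flagging.
# The synthetic features and the random-forest choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic "net features" (e.g., fan-in, distance to flip-flops, switching probability).
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 1.5).astype(int)  # 1 = Trigger net (synthetic)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

X_unknown = rng.normal(size=(300, 5))                  # nets of a netlist under detection
suspiciousness = model.predict_proba(X_unknown)[:, 1]  # probability of being a Trigger net

top_percent = 5
threshold = np.percentile(suspiciousness, 100 - top_percent)
flagged_nets = np.where(suspiciousness >= threshold)[0]
print("flagged nets:", flagged_nets)
```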

2020-02-24
Srivastava, Ankush, Ghosh, Prokash.  2019.  An Efficient Memory Zeroization Technique Under Side-Channel Attacks. 2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID). :76–81.
Protection of secured data content in volatile memories (processor caches, embedded RAMs, etc.) is essential in networking, wireless, automotive and other embedded secure applications. It is of utmost importance to protect secret data, such as authentication credentials and cryptographic keys, stored in volatile memories, which can be hacked during normal device operation. Several security attacks, such as cold boot, disclosure, data remanence, physical and cache attacks, can extract cryptographic keys or secure data from the system's volatile memories. Content protection of memory is typically done by assuring data deletion in the minimum possible time to minimize data remanence effects. In today's state-of-the-art SoCs, dedicated hardware is used to functionally erase private memory contents in case of security violations. This paper proposes a novel approach that uses existing memory built-in self-test (MBIST) hardware to zeroize (initialize to all zeros) on-chip memory contents before they can be extracted through side channels or other security attacks. Our results show that the proposed MBIST-based content zeroization approach is substantially faster than conventional techniques. By adopting the proposed approach, the dedicated functional hardware requirement for memory zeroization can be waived.
2020-02-18
Zheng, Jianjun, Siami Namin, Akbar.  2019.  Enforcing Optimal Moving Target Defense Policies. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:753–759.
This paper introduces an approach based on control theory to model, analyze and select optimal security policies for Moving Target Defense (MTD) deployment strategies. A Markov Decision Process (MDP) scheme is presented to model the states of the system from an attacker's point of view. The employed value iteration method is based on the Bellman optimality equation for selecting the optimal policy for each state defined in the system. The model is then used to analyze the impact of various costs on the optimal policy, and it is applied to two case studies to evaluate its performance.
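As background for the policy-selection step, the sketch below runs value iteration on a tiny, made-up MDP until the Bellman optimality equation is satisfied to within a tolerance. The states, transition probabilities, and rewards are invented for illustration and do not represent the paper's MTD model.

```python
# Value iteration on a toy MDP, using the Bellman optimality equation:
#   V(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
# Transition probabilities and rewards below are invented for illustration.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],
    [[0.5, 0.5, 0.0], [0.0, 0.6, 0.4], [0.2, 0.0, 0.8]],
])
R = np.array([  # R[a, s] immediate reward (e.g., negative deployment cost)
    [0.0, -1.0, 2.0],
    [1.0, 0.0, -2.0],
])

V = np.zeros(n_states)
while True:
    Q = R + gamma * np.einsum("ast,t->as", P, V)  # action-value table Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

optimal_policy = Q.argmax(axis=0)  # best action index for each state
print("V* =", V_new, "policy =", optimal_policy)
```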
Talluri, Sacheendra, Iosup, Alexandru.  2019.  Efficient Estimation of Read Density When Caching for Big Data Processing. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :502–507.

Big data processing systems are becoming increasingly present in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. We focus in this work on the use of caching policies, which for big data raise important new challenges. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, a principled approach that quantifies the utility of cached objects through a family of utility functions that depend on the frequency of reads of an object. We further design the Approximate Histogram, a policy-based technique based on an array of counters that promises runtime- and space-efficient computation of the metric required by the cache policy. We evaluate the caching policies from the Read Density family through trace-based simulation and compare them with over ten state-of-the-art alternatives, using two workload traces representative of big data processing, collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
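The central mechanism is an array of counters that approximates per-object read frequency so that a utility score can be computed without exact per-object bookkeeping. The sketch below is a generic hash-bucketed counter array paired with a simple reads-per-byte eviction utility; the hashing scheme, the utility function, and all parameters are assumptions rather than the paper's exact Read Density or Approximate Histogram definitions.

```python
# Generic counter-array (hash-bucketed) read-frequency estimate and a simple
# frequency/size eviction utility. Parameters and the utility form are assumptions.
import hashlib

class ApproximateReadCounter:
    def __init__(self, n_counters: int = 1024):
        self.counters = [0] * n_counters

    def _bucket(self, object_id: str) -> int:
        digest = hashlib.md5(object_id.encode()).digest()
        return int.from_bytes(digest[:4], "little") % len(self.counters)

    def record_read(self, object_id: str) -> None:
        self.counters[self._bucket(object_id)] += 1

    def estimate(self, object_id: str) -> int:
        return self.counters[self._bucket(object_id)]

counter = ApproximateReadCounter()
cache = {"objA": 10_000_000, "objB": 500_000, "objC": 2_000_000}  # object -> size in bytes

for obj in ["objA", "objA", "objC", "objB", "objA", "objC"]:
    counter.record_read(obj)

# Utility: estimated reads per byte; evict the object with the lowest utility.
victim = min(cache, key=lambda obj: counter.estimate(obj) / cache[obj])
print("evict:", victim)
```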

Zhang, Detian, Liu, An, Jin, Gaoming, Li, Qing.  2019.  Edge-Based Shortest Path Caching for Location-Based Services. 2019 IEEE International Conference on Web Services (ICWS). :320–327.

Shortest path queries on road networks are widely used in location-based services (LBS), e.g., finding the shortest route from my home to the airport through Google Maps. However, when a large number of path queries arrive concurrently or within a short period, an LBS provider (e.g., Google Maps) has to endure a high workload, which may lead to long response times for users. Therefore, path caching services are utilized to accelerate large-scale path query processing; they store historical path results and reuse them to answer incoming queries directly. However, most existing path caches are organized based on the nodes of paths; hence, the underlying road-network topology is still needed to answer a path query whose origin or destination lies on an edge. To overcome this limitation, we propose an edge-based shortest path cache that can efficiently handle queries without needing any road information, which is much more practical in the real world. We achieve this by designing a totally new edge-based path cache structure, an efficient R-tree-based cache lookup algorithm, and a greedy-based cache construction algorithm. Extensive experiments on a real road network and real point-of-interest datasets are conducted, and the results show the efficiency, scalability, and applicability of our proposed caching techniques.
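To illustrate how a query can be answered from cached paths without consulting road-network topology, the sketch below stores each cached shortest path as an ordered list of edge IDs and returns the sub-path between the query's origin edge and destination edge whenever both lie on one cached path. The data layout is an assumption, and the paper's R-tree lookup and greedy cache construction are not reproduced.

```python
# Simplified edge-based path cache: cached paths are sequences of edge IDs, and a
# query hits when its origin and destination edges lie on the same cached path.
# The R-tree spatial lookup and greedy cache construction are not shown.
from typing import Optional, List

class EdgePathCache:
    def __init__(self):
        self.paths = []       # each cached path is an ordered list of edge IDs
        self.edge_index = {}  # edge ID -> indices of paths containing it

    def add_path(self, edges: List[str]) -> None:
        path_id = len(self.paths)
        self.paths.append(edges)
        for e in edges:
            self.edge_index.setdefault(e, set()).add(path_id)

    def lookup(self, origin_edge: str, dest_edge: str) -> Optional[List[str]]:
        candidates = self.edge_index.get(origin_edge, set()) & self.edge_index.get(dest_edge, set())
        for path_id in candidates:
            path = self.paths[path_id]
            i, j = path.index(origin_edge), path.index(dest_edge)
            if i <= j:                     # cached path runs in the right direction
                return path[i:j + 1]
        return None

cache = EdgePathCache()
cache.add_path(["e1", "e4", "e7", "e9", "e12"])
print(cache.lookup("e4", "e9"))   # hit: ['e4', 'e7', 'e9']
print(cache.lookup("e9", "e1"))   # miss (wrong direction in this simplified sketch)
```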

Tung Hoang, Xuan, Dung Bui, Ngoc.  2019.  An Enhanced Semantic-Based Cache Replacement Algorithm for Web Systems. 2019 IEEE-RIVF International Conference on Computing and Communication Technologies (RIVF). :1–6.

As Web traffic on the Internet increases, caching solutions for Web systems are becoming more important since they can greatly improve system scalability. An important part of a caching solution is the cache replacement policy, which is responsible for selecting victim items that should be removed in order to make space for new objects. Typical replacement policies used in practice only take advantage of temporal reference locality by removing the least recently/frequently requested items from the cache. Although such policies work well in memory or filesystem caches, they are inefficient for Web systems since they do not exploit the semantic relationships between Web items. This paper presents a semantic-aware caching policy that can be used in Web systems to enhance scalability. The proposed caching mechanism defines a semantic distance from a web page to a set of pivot pages and uses the semantic distances as a metric for choosing victims. It also uses a function-based metric that combines access frequency and cache item size for tie-breaking. Our simulations show that our enhancements outperform traditional methods in terms of hit rate, which can be useful for websites with many small, similar-sized web objects.
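The victim-selection idea can be summarized as: prefer to evict items that are semantically far from the pivot pages, and break ties with a function of access frequency and item size. The sketch below implements such a selection over a toy cache; the distance values, the tie-break function, and all names are illustrative assumptions rather than the paper's exact metric.

```python
# Illustrative semantic-aware victim selection: evict the item farthest from the
# pivot pages, breaking ties with a frequency/size score. Values are made up.
from dataclasses import dataclass
from typing import List

@dataclass
class CacheItem:
    url: str
    semantic_distance: float  # precomputed distance to the nearest pivot page
    access_frequency: int
    size_bytes: int

def tie_break_score(item: CacheItem) -> float:
    # Lower frequency and larger size make an item a better eviction candidate.
    return item.access_frequency / item.size_bytes

def choose_victim(cache: List[CacheItem]) -> CacheItem:
    # Primary key: larger semantic distance; secondary: lower frequency-per-byte.
    return max(cache, key=lambda it: (it.semantic_distance, -tie_break_score(it)))

cache = [
    CacheItem("/news/today", 0.2, 50, 12_000),
    CacheItem("/archive/2001", 0.9, 3, 40_000),
    CacheItem("/archive/2002", 0.9, 15, 8_000),
]
print(choose_victim(cache).url)   # '/archive/2001': far from pivots, rarely read, large
```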

2020-02-17
Eckhart, Matthias, Ekelhart, Andreas, Weippl, Edgar.  2019.  Enhancing Cyber Situational Awareness for Cyber-Physical Systems through Digital Twins. 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1222–1225.
Operators of cyber-physical systems (CPSs) need to maintain awareness of the cyber situation in order to be able to adequately address potential issues in a timely manner. For instance, detecting early symptoms of cyber attacks may speed up the incident response process and mitigate consequences of attacks (e.g., business interruption, safety hazards). However, attaining a full understanding of the cyber situation may be challenging, given the complexity of CPSs and the ever-changing threat landscape. In particular, CPSs typically need to be continuously operational, may be sensitive to active scanning, and often provide only limited in-depth analysis capabilities. To address these challenges, we propose to utilize the concept of digital twins for enhancing cyber situational awareness. Digital twins, i.e., virtual replicas of systems, can run in parallel to their physical counterparts and allow deep inspection of their behavior without the risk of disrupting operational technology services. This paper reports our work in progress to develop a cyber situational awareness framework based on digital twins that provides a profound, holistic, and current view on the cyber situation that CPSs are in. More specifically, we present a prototype that provides real-time visualization features (i.e., system topology, program variables of devices) and enables a thorough, repeatable investigation process on a logic and network level. A brief explanation of technological use cases and outlook on future development efforts completes this work.
Sharma, Aditya, Jain, Aaditya, Sharma, Ila.  2019.  Exposing the Security Weaknesses of Fifth Generation Handover Communication. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
With the development of Fifth Generation (5G) mobile telecommunication technology, the Third Generation Partnership Project (3GPP) is attempting to fulfill the increasing security demands of IoT-based applications. 3GPP has published the study report of the 5G handover architecture and security functions. In this work, we discuss the 5G handover key mechanism and its key hierarchy. In addition, the Xn-based and N2-based intra/inter-AMF handover mechanisms in the 5G communication network are analyzed to identify security weaknesses such as false base-station and Denial-of-Service (DoS) attacks. Moreover, the handover mechanism suffers from authentication complexity due to high bandwidth consumption. As a result of these security issues, future session keys can be compromised and a secure connection between the mobile/user equipment and the target base station cannot be established.
Papakonstantinou, Nikolaos, Linnosmaa, Joonas, Alanen, Jarmo, Bashir, Ahmed Z., O'Halloran, Bryan, Van Bossuyt, Douglas L..  2019.  Early Hybrid Safety and Security Risk Assessment Based on Interdisciplinary Dependency Models. 2019 Annual Reliability and Maintainability Symposium (RAMS). :1–7.
Safety and security of complex critical infrastructures are very important for economic, environmental and social reasons. The complexity of these systems introduces difficulties in the identification of safety and security risks that emerge from interdisciplinary interactions and dependencies. The discovery of safety and security design weaknesses late in the design process and during system operation can lead to increased costs, additional system complexity, delays and possibly undesirable compromises to address safety and security weaknesses.
Fett, Daniel, Hosseyni, Pedram, Küsters, Ralf.  2019.  An Extensive Formal Security Analysis of the OpenID Financial-Grade API. 2019 IEEE Symposium on Security and Privacy (SP). :453–471.
Forced by regulations and industry demand, banks worldwide are working to open their customers' online banking accounts to third-party services via web-based APIs. By using these so-called Open Banking APIs, third-party companies, such as FinTechs, are able to read information about and initiate payments from their users' bank accounts. Such access to financial data and resources needs to meet particularly high security requirements to protect customers. One of the most promising standards in this segment is the OpenID Financial-grade API (FAPI), currently under development in an open process by the OpenID Foundation and backed by large industry partners. The FAPI is a profile of OAuth 2.0 designed for high-risk scenarios and aiming to be secure against very strong attackers. To achieve this level of security, the FAPI employs a range of mechanisms that have been developed to harden OAuth 2.0, such as Code and Token Binding (including mTLS and OAUTB), JWS Client Assertions, and Proof Key for Code Exchange. In this paper, we perform a rigorous, systematic formal analysis of the security of the FAPI, based on an existing comprehensive model of the web infrastructure - the Web Infrastructure Model (WIM) proposed by Fett, Küsters, and Schmitz. To this end, we first develop a precise model of the FAPI in the WIM, including different profiles for read-only and read-write access, different flows, different types of clients, and different combinations of security features, capturing the complex interactions in a web-based environment. We then use our model of the FAPI to precisely define central security properties. In an attempt to prove these properties, we uncover partly severe attacks, breaking authentication, authorization, and session integrity properties. We develop mitigations against these attacks and finally are able to formally prove the security of a fixed version of the FAPI. Although financial applications are high-stakes environments, this work is the first to formally analyze and, importantly, verify an Open Banking security profile. By itself, this analysis is an important contribution to the development of the FAPI since it helps to define exact security properties and attacker models, and to avoid severe security risks before the first implementations of the standard go live. Of independent interest, we also uncover weaknesses in the aforementioned security mechanisms for hardening OAuth 2.0. We illustrate that these mechanisms do not necessarily achieve the security properties they have been designed for.
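One of the hardening mechanisms mentioned, Proof Key for Code Exchange (PKCE, RFC 7636), is easy to illustrate in isolation: the client derives a code_challenge from a random code_verifier and later proves possession of the verifier. The sketch below shows the S256 derivation only; it illustrates the mechanism itself, not the FAPI profiles or the formal model analyzed in the paper.

```python
# PKCE (RFC 7636) S256 derivation: code_challenge = BASE64URL(SHA256(code_verifier)).
# This illustrates only the hardening mechanism, not the FAPI flows analyzed in the paper.
import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # 32 random bytes -> 43-character URL-safe string (within RFC 7636's 43-128 range).
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def code_challenge_s256(verifier: str) -> str:
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

verifier = make_code_verifier()
challenge = code_challenge_s256(verifier)

# The authorization request carries the challenge; the token request later carries
# the verifier, and the server checks that S256(verifier) matches the stored challenge.
assert code_challenge_s256(verifier) == challenge
print("code_verifier:", verifier)
print("code_challenge:", challenge)
```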
Hassan, Mehmood, Mansoor, Khwaja, Tahir, Shahzaib, Iqbal, Waseem.  2019.  Enhanced Lightweight Cloud-assisted Mutual Authentication Scheme for Wearable Devices. 2019 International Conference on Applied and Engineering Mathematics (ICAEM). :62–67.
With the emergence of IoT, wearable devices are drawing attention and becoming part of our daily life. These wearable devices collect private information about their wearers. Usually, a secure authentication process that relies on the mobile terminal is used to verify a legitimate user. Similarly, remote cloud services are used for verification and authentication of both wearable devices and wearers. Security is necessary to preserve the privacy of users. Several traditional authentication protocols have been proposed, but they have vulnerabilities and are prone to attacks such as forgery, de-synchronization, and un-traceability issues. To address these vulnerabilities, Wu et al. (2017) recently proposed a cloud-assisted authentication scheme, which is costly in terms of the computation required. Therefore, this paper proposes an improved, lightweight and computationally efficient authentication scheme for wearable devices. The proposed scheme provides a similar level of security to Wu's (2017) scheme but requires 41.2% less computation.
Yang, Chen, Liu, Tingting, Zuo, Lulu, Hao, Zhiyong.  2019.  An Empirical Study on the Data Security and Privacy Awareness to Use Health Care Wearable Devices. 2019 16th International Conference on Service Systems and Service Management (ICSSSM). :1–6.
Recently, several health care wearable devices that can intervene in health and collect personal health data have emerged in the medical market. Although health care wearable devices promote the integration of multi-layer medical resources and bring new kinds of health applications for users, some problems inevitably arise, mainly in the safety protection of medical and health data and the protection of users' privacy. From the users' point of view, the irrational use of medical and health data may have negative psychological and physical effects on them. From the government's perspective, such data may be sold by private businesses in the international arena and threaten national security. The most direct precaution against these problems is users' own initiative. For better understanding, a research model is designed around the following five aspects: Security knowledge (SK), Security attitude (SAT), Security practice (SP), Security awareness (SAW) and Security conduct (SC). To verify the model, structural equation analysis, an empirical approach, was applied to examine its validity, and all the results showed that SK, SAT, SP, SAW and SC are important factors affecting users' data security and privacy protection awareness.
Wen, Jinming, Yu, Wei.  2019.  Exact Sparse Signal Recovery via Orthogonal Matching Pursuit with Prior Information. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :5003–5007.
The orthogonal matching pursuit (OMP) algorithm is a commonly used algorithm for recovering K-sparse signals x ∈ ℝ^n from the linear model y = Ax, where A ∈ ℝ^(m×n) is a sensing matrix. A fundamental question in the performance analysis of OMP is the characterization of the probability that it can exactly recover x for a random matrix A. Although in many practical applications x usually has some additional property beyond sparsity (for example, the nonzero entries of x independently and identically follow the Gaussian distribution), none of the existing analyses use these properties to answer the above question. In this paper, we first show that the prior distribution information of x can be used to provide an upper bound on ‖x‖₁²/‖x‖₂², and then use this bound to develop a better lower bound on the probability of exact recovery with OMP in K iterations. Simulation tests are presented to illustrate the superiority of the new bound.
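For readers unfamiliar with the algorithm under analysis, the sketch below is a plain NumPy implementation of OMP for recovering a K-sparse x from y = Ax: each iteration selects the column most correlated with the residual and re-solves a least-squares problem on the selected support. The problem sizes and random data are illustrative, and the paper's probability bounds are not reproduced.

```python
# Plain orthogonal matching pursuit (OMP) for y = Ax with K-sparse x.
# Random test data below is illustrative only.
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, K: int) -> np.ndarray:
    m, n = A.shape
    support = []
    residual = y.copy()
    x_hat = np.zeros(n)
    for _ in range(K):
        correlations = A.T @ residual                      # match columns against residual
        support.append(int(np.argmax(np.abs(correlations))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs              # orthogonalize against chosen columns
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
m, n, K = 64, 256, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.normal(size=K)   # Gaussian nonzero entries
y = A @ x

x_hat = omp(A, y, K)
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))
```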
Moquin, S. J., Kim, SangYun, Blair, Nicholas, Farnell, Chris, Di, Jia, Mantooth, H. Alan.  2019.  Enhanced Uptime and Firmware Cybersecurity for Grid-Connected Power Electronics. 2019 IEEE CyberPELS (CyberPELS). :1–6.
A distributed energy resource prototype is used to demonstrate cybersecurity best practices. These best practices include straightforward security techniques, such as encrypted serial communication, as well as more sophisticated ones, such as a method to evaluate and respond to firmware integrity at run-time. The prototype uses embedded Linux, a hardware-assisted monitor, one or more digital signal processors, and grid-connected power electronics. Security features to protect communication, firmware, power flow, and hardware are developed. The firmware run-time integrity security is evaluated and shown to maintain power-electronics uptime during firmware updating. The firmware run-time security feature can be extended to allow software rejuvenation, multi-mission controls, and greater flexibility and security in controls.
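A common way to realize run-time firmware integrity evaluation is to have a monitor periodically hash the firmware image and compare it against a known-good digest, triggering a response on mismatch. The sketch below shows that pattern over a file path; the path, interval, and response policy are assumptions, and this is not the paper's hardware-assisted design.

```python
# Illustrative run-time firmware integrity monitor: periodically hash an image and
# compare with a reference digest. Path, interval and response are assumptions.
import hashlib
import time

FIRMWARE_IMAGE = "/opt/der/firmware.bin"      # hypothetical path
CHECK_INTERVAL_SECONDS = 10

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def monitor(reference_digest: str) -> None:
    while True:
        current = sha256_of_file(FIRMWARE_IMAGE)
        if current != reference_digest:
            # Response policy (assumed): log and fall back to a safe control mode.
            print("integrity violation detected, switching to safe mode")
            break
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor(sha256_of_file(FIRMWARE_IMAGE))   # baseline digest taken at startup (assumption)
```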