Biblio

Found 364 results

Filters: Keyword is reliability
2017-03-07
Armin, J., Thompson, B., Ariu, D., Giacinto, G., Roli, F., Kijewski, P..  2015.  2020 Cybercrime Economic Costs: No Measure No Solution. 2015 10th International Conference on Availability, Reliability and Security. :701–710.

Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. Analysis of the current scenario of government action in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at national and international levels.

2017-02-14
M. Ussath, F. Cheng, C. Meinel.  2015.  "Concept for a security investigation framework". 2015 7th International Conference on New Technologies, Mobility and Security (NTMS). :1-5.

The number of detected and analyzed Advanced Persistent Threat (APT) campaigns has increased in recent years. Two of the main objectives of such campaigns are to maintain long-term access to the environment of the target and to stay undetected. To achieve these goals the attackers use sophisticated and customized techniques for lateral movement, ensuring that these activities are not detected by existing security systems. During an investigation of an APT campaign, all of its stages are relevant for clarifying important details such as the initial infection vector or the compromised systems and credentials. Most approaches currently utilized within security systems are not able to detect the different stages of a complex attack, and a comprehensive security investigation is therefore needed. In this paper we describe a concept for a Security Investigation Framework (SIF) that supports the analysis and tracing of multi-stage APTs. The concept includes different automatic and semi-automatic approaches that support the investigation of such attacks. Furthermore, the framework leverages different information sources, such as log files and details from forensic investigations and malware analyses, to give a comprehensive overview of the different stages of an attack. The overall objective of the SIF is to improve the efficiency of investigations and reveal undetected details of an attack.

2017-03-07
Bulbul, R., Ten, C. W., Wang, L..  2015.  Prioritization of MTTC-based combinatorial evaluation for hypothesized substations outages. 2015 IEEE Power Energy Society General Meeting. :1–5.

Exhaustive enumeration of an S-select-k problem for hypothesized substation outages can be practically infeasible due to the exponential growth of combinations as both S and k increase. This enumeration of worst-case substation scenarios from the large set can, however, be improved based on initial selection sets with root nodes and segments. In this paper, the previous work on the reverse pyramid model (RPM) is enhanced with prioritization of root nodes and defined segmentations of the substation list based on the mean-time-to-compromise (MTTC) value associated with each substation. Root nodes are selected based on threshold values of the substation ranking on MTTC, and the root node set is segmented accordingly. Each segment is then enumerated with an S-select-k module to identify worst-case scenarios. Substations at the bottom of the list, e.g., those with no assigned MTTC or an extremely low value, are eliminated entirely. Simulations show that this approach produces risk indices similar to those obtained over all randomly generated MTTC values for the IEEE 30-bus system.
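As a rough illustration of the combinatorial pressure the abstract describes, and of MTTC-based pruning of the candidate set, consider the following sketch (the substation names, MTTC values, and threshold are hypothetical, not from the paper):

```python
from math import comb

# S-select-k outage combinations grow rapidly with S and k.
substations = 30          # e.g., the IEEE 30-bus system
for k in (2, 3, 4, 5):
    print(k, comb(substations, k))   # 435, 4060, 27405, 142506

# MTTC-based prioritization: drop substations below a threshold,
# then enumerate only the remaining, much smaller candidate set.
mttc = {"S1": 12.5, "S2": 0.0, "S3": 40.2, "S4": 7.8, "S5": 25.1}
threshold = 5.0
candidates = sorted((s for s, t in mttc.items() if t >= threshold),
                    key=lambda s: mttc[s], reverse=True)
print(candidates)   # highest-MTTC substations first
```

Even this small pruning step removes S2 before any enumeration, which is the point of prioritizing root nodes by MTTC.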

2017-03-08
Li, Sihuan, Hu, Lihui.  2015.  Risk assessment of agricultural supply chain based on AHP- FCS in Eastern Area of Hunan Province. 2015 International Conference on Logistics, Informatics and Service Sciences (LISS). :1–6.

In recent years, the vulnerability of the agricultural products supply chain has been exposed by a steady stream of security incidents, varying in location and severity, ranging from natural disasters to failures at individual nodes of the chain. Because the Eastern Area is a key part of Hunan Province owing to its abundant agricultural products, the security of its agricultural products supply chain is tied to the safety and stability of economic development in the entire region. To make the empirical analysis of risk management more objective, scientific, and practical, this study applies the AHP-FCS method to move from qualitative to quantitative analysis of agricultural supply chain risk, identifying and evaluating the probability and severity of each risk.

2017-02-21
Shuhao Liu, Baochun Li.  2015.  "On scaling software-Defined Networking in wide-area networks". Tsinghua Science and Technology. 20:221-232.

Software-Defined Networking (SDN) has emerged as a promising direction for next-generation network design. Due to its clean-slate and highly flexible design, it is believed to be a foundational principle for designing network architectures and improving their flexibility, resilience, reliability, and security. As the technology matures, research in both industry and academia has produced a considerable number of tools to scale software-defined networks, in preparation for wide deployment in wide-area networks. In this paper, we survey the mechanisms that can be used to address the scalability issues in software-defined wide-area networks. Starting from a successful distributed system, the Domain Name System, we discuss the essential elements that make a large-scale network infrastructure scalable. Then, the existing technologies proposed in the literature are reviewed in three categories: scaling out the data plane, scaling up the data plane, and scaling the control plane. We conclude with possible research directions towards scaling software-defined wide-area networks.

M. Moradi, F. Qian, Q. Xu, Z. M. Mao, D. Bethea, M. K. Reiter.  2015.  "Caesar: high-speed and memory-efficient forwarding engine for future internet architecture". 2015 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS). :171-182.

In response to the critical challenges of the current Internet architecture and its protocols, a set of so-called clean slate designs has been proposed. Common among them is an addressing scheme that separates location and identity with self-certifying, flat and non-aggregatable address components. Each component is long, reaching a few kilobits, and would consume an amount of fast memory in data plane devices (e.g., routers) that is far beyond existing capacities. To address this challenge, we present Caesar, a high-speed and length-agnostic forwarding engine for future border routers, performing most of the lookups within three fast memory accesses. To compress forwarding states, Caesar constructs scalable and reliable Bloom filters in Ternary Content Addressable Memory (TCAM). To guarantee correctness, Caesar detects false positives at high speed and develops a blacklisting approach to handling them. In addition, we optimize our design by introducing a hashing scheme that reduces the number of hash computations from k to log(k) per lookup based on hash coding theory. We handle routing updates while keeping filters highly utilized in address removals. We perform extensive analysis and simulations using real traffic and routing traces to demonstrate the benefits of our design. Our evaluation shows that Caesar is more energy-efficient and less expensive (in terms of total cost) compared to optimized IPv6 TCAM-based solutions by up to 67% and 43% respectively. In addition, the total cost of our design is approximately the same for various address lengths.
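A minimal sketch of the Bloom-filter-plus-blacklist idea behind Caesar may help make the abstract concrete. This toy runs in software rather than TCAM, derives its k bit indexes from slices of a single digest (not the paper's log(k) hash-coding scheme), and uses made-up prefix names:

```python
import hashlib

class TinyBloom:
    """Minimal Bloom filter; k indexes derived from one SHA-256 digest."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indexes(self, item):
        d = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(d[4*i:4*i+4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = 1

    def __contains__(self, item):
        return all(self.bits[i] for i in self._indexes(item))

# Bloom filters admit false positives; a blacklist of detected false
# positives restores correctness, as Caesar does at line speed.
bf, blacklist = TinyBloom(), set()
for r in ("prefix-A", "prefix-B"):
    bf.add(r)

def lookup(item):
    return item in bf and item not in blacklist

assert lookup("prefix-A")
```

The compression comes from storing bit positions instead of the full multi-kilobit addresses; the blacklist is the correctness escape hatch.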

2020-03-09
Xie, Yuanpeng, Jiang, Yixin, Liao, Runfa, Wen, Hong, Meng, Jiaxiao, Guo, Xiaobin, Xu, Aidong, Guan, Zewu.  2015.  User Privacy Protection for Cloud Computing Based Smart Grid. 2015 IEEE/CIC International Conference on Communications in China - Workshops (CIC/ICCC). :7–11.

The smart grid aims to improve the efficiency, reliability, and safety of the electric system via modern communication systems, and it is natural to utilize cloud computing to process and store the resulting data; integrating the smart grid into cloud computing is a promising paradigm. However, access to cloud computing systems also brings data security issues. This paper focuses on the protection of user privacy in the smart meter system based on data combination privacy and a trusted third party. The paper examines the security issues of smart grid communication systems and of cloud computing separately, and then illustrates the security issues of their integration. We introduce data chunk storage and chunk relationship confusion to protect user privacy, and we propose a chunk information list system for inserting and searching data.

2014-09-17
Biswas, Trisha, Lesser, Kendra, Dutta, Rudra, Oishi, Meeko.  2014.  Examining Reliability of Wireless Multihop Network Routing with Linear Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :19:1–19:2.

In this study, we present a control theoretic technique to model routing in wireless multihop networks. We model ad hoc wireless networks as stochastic dynamical systems where, as a base case, a centralized controller pre-computes optimal paths to the destination. The usefulness of this approach lies in the fact that it can help obtain bounds on reliability of end-to-end packet transmissions. We compare this approach with the reliability achieved by some of the widely used routing techniques in multihop networks.

2015-11-23
YoungMin Kwon, University of Illinois at Urbana-Champaign, Gul Agha, University of Illinois at Urbana-Champaign.  2014.  Performance Evaluation of Sensor Networks by Statistical Modeling and Euclidean Model Checking. ACM Transactions on Sensor Networks. 9(4)

Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase—where it can help determine system parameters based on a specification—and in the evaluation phase—where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol’s implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system.
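The pmf-as-a-point-in-Euclidean-space view can be sketched in a few lines: the global state is a probability vector, and one step of the system moves it along a trajectory via the per-node Markov transition matrix. The three node states and the matrix below are illustrative, not taken from the paper:

```python
import numpy as np

# Per-node Markov chain over three states, e.g. {idle, sensing, transmitting}
# (state names and transition probabilities are hypothetical).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.3, 0.0, 0.7]])

pmf = np.array([1.0, 0.0, 0.0])    # fraction of nodes in each state
trajectory = [pmf]                 # points in the Euclidean space of pmfs
for _ in range(50):
    pmf = pmf @ P                  # one step along the trajectory
    trajectory.append(pmf)

assert np.isclose(pmf.sum(), 1.0)  # every point remains a valid pmf
```

Euclidean model checking then asks whether such trajectories ever enter (or avoid) specified regions of this space, which is what lets a dense state space be explored through finitely many relevant portions.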

2016-12-05
Eric Yuan, Naeem Esfahani, Sam Malek.  2014.  A Systematic Survey of Self-Protecting Software Systems. ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special Section on Best Papers from SEAMS 2012 . 8(4)

Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends and challenges in the existing approaches, which reveals a number of opportunities that will shape the focus of future research efforts.

2015-05-04
Fatemi Moghaddam, F., Varnosfaderani, S.D., Mobedi, S., Ghavam, I., Khaleghparast, R..  2014.  GD2SA: Geo detection and digital signature authorization for secure accessing to cloud computing environments. Computer Applications and Industrial Electronics (ISCAIE), 2014 IEEE Symposium on. :39-42.

Cloud computing is an emerging paradigm for hosting and delivering resources over a network such as the Internet, using concepts of virtualization, processing power, and storage. However, many challenging issues remain unresolved in cloud-based environments and reduce reliability and efficiency for service providers and users. User authentication is one of the most challenging of these issues, and this paper accordingly proposes an efficient user authentication model with defined phases covering both registration and access. Geo Detection and Digital Signature Authorization (GD2SA) is a user authentication tool for provisional access permission in cloud computing environments. The main aim of GD2SA is to compare the location of an unregistered device with the location of the user, inferred from the devices the user is known to carry (e.g., a smartphone). In addition, the authentication algorithm uses the digital signature of the account owner to verify the identity of the applicant. The model is evaluated in this paper against three main parameters: efficiency, scalability, and security. Overall, the theoretical analysis of the proposed model shows that it can increase reliability and efficiency in cloud computing as an emerging technology.

2015-05-01
Guoyuan Lin, Danru Wang, Yuyu Bie, Min Lei.  2014.  MTBAC: A mutual trust based access control model in Cloud computing. Communications, China. 11:154-162.

As a new computing mode, cloud computing can provide users with virtualized and scalable web services, but it faces serious security challenges. Access control is one of the most important measures for ensuring the security of cloud computing, yet applying traditional access control models directly to the Cloud cannot resolve the uncertainty and vulnerability caused by its open conditions. In a cloud computing environment, data security during interactions between users and the Cloud can be effectively guaranteed only when the security and reliability of both parties are ensured. Building a mutual trust relationship between users and the cloud platform is therefore the key to implementing new kinds of access control methods in cloud computing. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration; trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.

2015-04-30
Hemalatha, A., Venkatesh, R..  2014.  Redundancy management in heterogeneous wireless sensor networks. Communications and Signal Processing (ICCSP), 2014 International Conference on. :1849-1853.

A wireless sensor network is a special type of ad hoc network composed of a large number of sensor nodes spread over a wide geographical area, each with wireless communication capability and sufficient intelligence for signal processing and for disseminating data to the collection center. This paper deals with redundancy management for improving network efficiency and query reliability in heterogeneous wireless sensor networks. The proposed scheme finds a reliable path using a redundancy management algorithm and detects unreliable nodes by discarding their paths. The redundancy management algorithm selects the reliable path based on the redundancy level and the average distance between source and destination nodes, analyzing the redundancy level as path redundancy and source redundancy. To find the path from the source cluster head to the processing center, we propose intrusion tolerance in the presence of unreliable nodes. Finally, we apply the analysis to the redundancy management algorithm to find a reliable path that improves network efficiency and query success probability.

2015-05-06
Silei Xu, Runhui Li, Lee, P.P.C., Yunfeng Zhu, Liping Xiang, Yinlong Xu, Lui, J.C.S..  2014.  Single Disk Failure Recovery for X-Code-Based Parallel Storage Systems. Computers, IEEE Transactions on. 63:995-1007.

In modern parallel storage systems (e.g., cloud storage and data centers), it is important to provide data availability guarantees against disk (or storage node) failures via redundancy coding schemes. One coding scheme is X-code, which is double-fault tolerant while achieving the optimal update complexity. When a disk/node fails, recovery must be carried out to reduce the possibility of data unavailability. We propose an X-code-based optimal recovery scheme called minimum-disk-read-recovery (MDRR), which minimizes the number of disk reads for single-disk failure recovery. We make several contributions. First, we show that MDRR provides optimal single-disk failure recovery and reduces about 25 percent of disk reads compared to the conventional recovery approach. Second, we prove that any optimal recovery scheme for X-code cannot balance disk reads among different disks within a single stripe in general cases. Third, we propose an efficient logical encoding scheme that issues balanced disk read in a group of stripes for any recovery algorithm (including the MDRR scheme). Finally, we implement our proposed recovery schemes and conduct extensive testbed experiments in a networked storage system prototype. Experiments indicate that MDRR reduces around 20 percent of recovery time of the conventional approach, showing that our theoretical findings are applicable in practice.
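The recovery principle can be illustrated with single-parity XOR, of which X-code's two diagonal parities are a double-fault-tolerant generalization; the sketch below is a deliberate simplification, not the MDRR algorithm, and the data chunks are arbitrary:

```python
from functools import reduce

# Single-parity erasure coding: any one lost chunk can be rebuilt by
# XORing all the survivors together with the parity chunk.
data = [b"\x01\x02", b"\x0a\x0b", b"\xff\x00"]
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
parity = reduce(xor, data)

lost = 1                                   # simulate losing disk 1
survivors = [c for i, c in enumerate(data) if i != lost] + [parity]
recovered = reduce(xor, survivors)
assert recovered == data[lost]
```

MDRR's contribution is choosing *which* disks to read so that this kind of reconstruction touches as few blocks as possible, roughly 25 percent fewer reads than the conventional approach per the abstract.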

2015-05-04
Hauger, W.K., Olivier, M.S..  2014.  The role of triggers in database forensics. Information Security for South Africa (ISSA), 2014. :1-7.

An aspect of database forensics that has not received much attention in the academic research community yet is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database. These changes can be on the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work. They are simply interrogating the data and metadata without making any changes. This paper attempts to establish if the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers as defined in the SQL standard were studied together with a number of database trigger implementations. This was done in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, it finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.
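The forensic concern is easy to reproduce with a toy trigger. This hypothetical SQLite example (table names and trigger logic invented for illustration) shows an ordinary UPDATE silently deleting audit evidence as a side effect, exactly the kind of behavior an examiner must account for:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts(id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit(note TEXT);
-- A trigger that rewrites evidence whenever a row is updated.
CREATE TRIGGER hide_trace AFTER UPDATE ON accounts
BEGIN
  DELETE FROM audit WHERE note LIKE 'suspicious%';
END;
INSERT INTO accounts VALUES (1, 100);
INSERT INTO audit VALUES ('suspicious transfer on id 1');
""")
con.execute("UPDATE accounts SET balance = 0 WHERE id = 1")
remaining = con.execute("SELECT COUNT(*) FROM audit").fetchone()[0]
print(remaining)  # 0: the trigger removed the audit note
```

An investigator who only "interrogates the data" after such an update would find the audit trail already altered, which is the paper's argument for including trigger analysis in the forensic process.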
 

2015-05-01
Sgouras, K.I., Birda, A.D., Labridis, D.P..  2014.  Cyber attack impact on critical Smart Grid infrastructures. Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES. :1-5.

Electrical distribution networks face new challenges from Smart Grid deployment. The required metering infrastructures add new vulnerabilities that need to be taken into account in order to achieve Smart Grid functionality without a considerable reliability trade-off. In this paper, a qualitative assessment of the cyber attack impact on the Advanced Metering Infrastructure (AMI) is first attempted. Attack simulations have been conducted on a realistic grid topology consisting of smart meters, routers, and utility servers. Finally, the impact of Denial-of-Service and Distributed Denial-of-Service (DoS/DDoS) attacks on distribution system reliability is discussed through a qualitative analysis of reliability indices.

2015-05-01
Farzan, F., Jafari, M.A., Wei, D., Lu, Y..  2014.  Cyber-related risk assessment and critical asset identification in power grids. Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES. :1-5.

This paper proposes a methodology to assess cyber-related risks and to identify critical assets at both the power grid and substation levels. The methodology is based on a two-pass engine model. The first pass engine identifies the most critical substation(s) in a power grid; a mixture of the Analytic Hierarchy Process (AHP) and (N-1) contingency analysis is used to calculate risks. The second pass engine identifies risky assets within a substation and reduces the vulnerability of the substation to intrusion and malicious acts by cyber hackers. The risk methodology uniquely combines asset reliability, vulnerability, and cost of attack into a risk index. A methodology is also presented to improve the overall security of a substation by optimally placing security agent(s) on the automation system.

2015-05-06

Hardy, T.L..  2014.  Resilience: A holistic safety approach. Reliability and Maintainability Symposium (RAMS), 2014 Annual. :1-6.

Decreasing the potential for catastrophic consequences poses a significant challenge for high-risk industries. Organizations are under many different pressures, and they are continuously trying to adapt to changing conditions and recover from disturbances and stresses that can arise from both normal operations and unexpected events. Reducing risks in complex systems therefore requires that organizations develop and enhance traits that increase resilience. Resilience provides a holistic approach to safety, emphasizing the creation of organizations and systems that are proactive, interactive, reactive, and adaptive. This approach relies on disciplines such as system safety and emergency management, but also requires that organizations develop indicators and ways of knowing when an emergency is imminent. A resilient organization must be adaptive, using hands-on activities and lessons learned efforts to better prepare it to respond to future disruptions. It is evident from the discussions of each of the traits of resilience, including their limitations, that there are no easy answers to reducing safety risks in complex systems. However, efforts to strengthen resilience may help organizations better address the challenges associated with the ever-increasing complexities of their systems.

Holm, H..  2014.  Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter? System Sciences (HICSS), 2014 47th Hawaii International Conference on. :4895-4904.

A frequent claim that has not been validated is that signature based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks against the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, overall greater for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analysis of these aspects suggests that a conservative estimate of zero-day detection by Snort is 8.2%.
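For a back-of-the-envelope reading of the reported figures, the two per-category means can be combined into an overall rate; the weighting below is our illustration of the arithmetic, not a statistic from the paper:

```python
# Counts and mean detection rates as reported in the abstract.
zero_days, known = 183, 173
detected_zero = zero_days * 0.17    # ~31 zero-day attacks detected
detected_known = known * 0.54       # ~93 known attacks detected

# Weighted overall detection rate across all 356 tested attacks.
overall = (detected_zero + detected_known) / (zero_days + known)
print(round(overall, 3))            # roughly 0.35
```

That gap between the 17% raw zero-day mean and the paper's 8.2% conservative estimate reflects the false-alarm and evasion analysis, not a recalculation of the counts.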

2015-04-30
Kirsch, J., Goose, S., Amir, Y., Dong Wei, Skare, P..  2014.  Survivable SCADA Via Intrusion-Tolerant Replication. Smart Grid, IEEE Transactions on. 5:60-70.

Providers of critical infrastructure services strive to maintain the high availability of their SCADA systems. This paper reports on our experience designing, architecting, and evaluating the first survivable SCADA system, one that is able to ensure correct behavior with minimal performance degradation even during cyber attacks that compromise part of the system. We describe the challenges we faced when integrating modern intrusion-tolerant protocols with a conventional SCADA architecture and present the techniques we developed to overcome these challenges. The results illustrate that our survivable SCADA system not only functions correctly in the face of a cyber attack, but also processes in excess of 20,000 messages per second with a latency of less than 30 ms, making it suitable for even large-scale deployments managing thousands of remote terminal units.

2015-05-01
Marashi, K., Sarvestani, S.S..  2014.  Towards Comprehensive Modeling of Reliability for Smart Grids: Requirements and Challenges. High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on. :105-112.

Smart grids utilize computation and communication to improve the efficacy and dependability of power generation, transmission, and distribution. As such, they are among the most critical and complex cyber-physical systems. The success of smart grids in achieving their stated goals is yet to be rigorously proven. In this paper, our focus is on improvements (or lack thereof) in reliability. We discuss vulnerabilities in the smart grid and their potential impact on its reliability, both generally and for the specific example of the IEEE 14-bus system. We conclude the paper by presenting a preliminary Markov imbedded systems model for the reliability of smart grids and describe how it can be evolved to capture the vulnerabilities discussed.

Albasrawi, M.N., Jarus, N., Joshi, K.A., Sarvestani, S.S..  2014.  Analysis of Reliability and Resilience for Smart Grids. Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual. :529-534.

Smart grids, where cyber infrastructure is used to make power distribution more dependable and efficient, are prime examples of modern infrastructure systems. The cyber infrastructure provides monitoring and decision support intended to increase the dependability and efficiency of the system. This comes at the cost of vulnerability to accidental failures and malicious attacks, due to the greater extent of virtual and physical interconnection. Any failure can propagate more quickly and extensively, and as such, the net result could be lowered reliability. In this paper, we describe metrics for assessment of two phases of smart grid operation: the duration before a failure occurs, and the recovery phase after an inevitable failure. The former is characterized by reliability, which we determine based on information about cascading failures. The latter is quantified using resilience, which can in turn facilitate comparison of recovery strategies. We illustrate the application of these metrics to a smart grid based on the IEEE 9-bus test system.