
SoS Newsletter- Advanced Book Block

 

 

Effectiveness and Work Factor Metrics

2015 – 2016 (Part 1)

 

Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.



I. Kotenko and E. Doynikova, “Countermeasure Selection in SIEM Systems Based on the Integrated Complex of Security Metrics,” 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, 2015, pp. 567-574. doi: 10.1109/PDP.2015.34
Abstract: The paper considers a technique for countermeasure selection in security information and event management (SIEM) systems. The developed technique is based on the suggested complex of security metrics. For countermeasure selection, the set of security metrics is extended with an additional level needed for security decision support. This level is based on countermeasure effectiveness metrics. Key features of the suggested technique are the application of attack and service dependency graphs, the introduced countermeasure model, and the suggested metrics of countermeasure effectiveness, cost, and collateral damage. Another important feature of the technique is that it can produce a countermeasure implementation decision at any time, based on the current security state and security events.
Keywords: decision support systems; graph theory; security of data; software metrics; SIEM systems; attack dependencies graphs; countermeasure selection; integrated complex; security decision support; security events; security information and event management; security metrics; security state; service dependencies graphs; Authentication; Measurement; Risk management; Taxonomy; attack graphs; countermeasures; cyber security; risk assessment  (ID#: 16-10214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092776&isnumber=7092002

 

M. Ge and D. S. Kim, “A Framework for Modeling and Assessing Security of the Internet of Things,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 776-781. doi: 10.1109/ICPADS.2015.102
Abstract: Internet of Things (IoT) is enabling innovative applications in various domains. Due to its heterogeneous and wide scale structure, it introduces many new security issues. To address the security problem, we propose a framework for security modeling and assessment of the IoT. The framework helps to construct graphical security models for the IoT. Generally, the framework involves five steps to find attack scenarios, analyze the security of the IoT through well-defined security metrics, and assess the effectiveness of defense strategies. The benefits of the framework are presented via a study of two example IoT networks. Through the analysis results, we show the capabilities of the proposed framework on mitigating impacts of potential attacks and evaluating the security of large-scale networks.
Keywords: Internet of Things; security of data; IoT networks; defense strategies effectiveness; graphical security models; large-scale networks; security assessment; security metrics; security modeling; Analytical models; Body area networks; Computational modeling; Measurement; Network topology; Security; Wireless communication; Attack Graphs; Hierarchical Attack Representation Model; Security Analysis (ID#: 16-10215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7384365&isnumber=7384203
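
For readers who want to experiment with the kind of graphical security model this framework builds, a common attack-graph metric is the probability of the most likely attack path. A minimal sketch follows; the toy IoT topology, node names, and exploit probabilities are illustrative assumptions, not taken from the paper:

    # Attack graph as an adjacency list; edge weights are illustrative
    # probabilities of successfully exploiting the next device.
    graph = {
        "internet": [("gateway", 0.8)],
        "gateway": [("sensor", 0.5), ("controller", 0.3)],
        "sensor": [("controller", 0.9)],
        "controller": [],
    }

    def max_attack_prob(node, target, p=1.0):
        # Probability of the most likely attack path from node to target
        # (assumes the graph is acyclic, as attack graphs typically are).
        if node == target:
            return p
        return max((max_attack_prob(nxt, target, p * q)
                    for nxt, q in graph[node]), default=0.0)

    print(max_attack_prob("internet", "controller"))  # 0.8 * 0.5 * 0.9 = 0.36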

 

P. Pandey and E. A. Snekkenes, “A Performance Assessment Metric for Information Security Financial Instruments,” Information Society (i-Society), 2015 International Conference on, London, 2015, pp. 138-145. doi: 10.1109/i-Society.2015.7366876
Abstract: Business interruptions caused by cyber-attacks pose a serious threat to the revenue and share price of an organisation. Furthermore, recent cyber-attacks on various organisations prove that technical controls, security policies, and regulatory compliance are not sufficient to mitigate cyber risks. In such a scenario, the residual cyber risk can be mitigated with cyber-insurance policies and with information security derivatives (financial instruments). Information security derivatives are a new class of financial instruments designed to provide an alternate risk mitigation mechanism to reduce the potential adverse impact of an information security event. However, there is a lack of research on metrics to measure the performance of information security derivatives in mitigating the underlying risk. This article examines the basic requirements for assessing the performance of information security derivatives. Furthermore, the article presents three metrics, namely hedge ratio, hedge effectiveness, and hedge efficiency, to formulate and evaluate a cyber risk mitigation strategy devised with information security derivatives. The application of these metrics is demonstrated in a hypothetical scenario. An accurate measure of the performance of information security derivatives is of practical importance for an effective risk management strategy.
Keywords: business data processing; risk management; security of data; business interruptions; cyber risk mitigation strategy; cyber-attacks; cyber-insurance policies; hedge effectiveness; hedge efficiency; hedge ratio; information security derivatives; information security financial instruments; performance assessment metric; regulatory compliance; residual cyber risk; risk management strategy; risk mitigation mechanism; Correlation; Information security; Instruments; Measurement; Portfolios; Risk management; Financial Instrument; Hedge Effectiveness; Hedge Efficiency; Information Security; Risk Management (ID#: 16-10216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366876&isnumber=7366836
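
The three metrics named in the abstract have standard counterparts in the hedging literature. A minimal sketch under the textbook minimum-variance definitions follows; the sample data, the cost figure, and the efficiency definition (risk reduction per unit cost) are illustrative assumptions rather than the paper's exact formulations:

    import numpy as np

    # Illustrative value changes of the cyber-risk exposure (spot) and of
    # the information security derivative used to hedge it.
    spot = np.array([1.2, -0.8, 0.5, -1.5, 0.9, -0.3])
    deriv = np.array([1.0, -0.7, 0.6, -1.3, 0.8, -0.2])

    # Minimum-variance hedge ratio: h* = rho * sigma_spot / sigma_deriv.
    rho = np.corrcoef(spot, deriv)[0, 1]
    hedge_ratio = rho * spot.std(ddof=1) / deriv.std(ddof=1)

    # Hedge effectiveness: fraction of exposure variance removed by the hedge.
    hedged = spot - hedge_ratio * deriv
    effectiveness = 1.0 - hedged.var(ddof=1) / spot.var(ddof=1)

    # Hedge efficiency: risk reduction per unit of hedging cost (stand-in cost).
    efficiency = effectiveness / 0.05

    print(f"h* = {hedge_ratio:.3f}, effectiveness = {effectiveness:.3f}, "
          f"efficiency = {efficiency:.2f}")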

 

F. Dai, K. Zheng, S. Luo and B. Wu, “Towards a Multiobjective Framework for Evaluating Network Security Under Exploit Attacks,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7186-7191. doi: 10.1109/ICC.2015.7249473
Abstract: Exploit attacks have been one of the major threats to computer network systems; their damage has been extensively studied and numerous countermeasures have been proposed to defend against them. In this work, we propose a multiobjective optimization framework to facilitate the evaluation of network security under exploit attacks. Our approach explores a promising avenue of integrating attack graph methodology to evaluate network security. In particular, we utilize attack-graph-based security metrics to model exploit attacks and dynamically measure security risk under these attacks. A multiobjective problem is then formulated to maximize network exploitability and security impact under feasible exploit compositions. Furthermore, an artificial immune algorithm is employed to solve the formulated problem. We conduct a series of simulation experiments on hypothetical network models to test the performance of the proposed mechanism. Simulation results show that our approach solves the security evaluation problem under multiple decision variables both feasibly and effectively.
Keywords: artificial immune systems; computer network security; graph theory; artificial immune algorithm; attack graph based security metrics; attack graph methodology; computer network security evaluation; exploit attacks; multiobjective optimization framework; security risk; Analytical models; Communication networks; Measurement; Optimization; Security; Sociology; Statistics; attack graph; exploit attack; network security evaluation (ID#: 16-10217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249473&isnumber=7248285

 

B. Duncan and M. Whittington, “The Importance of Proper Measurement for a Cloud Security Assurance Model,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 517-522. doi: 10.1109/CloudCom.2015.91
Abstract: Defining proper measures for evaluating the effectiveness of an assurance model, which we have developed to ensure cloud security, is vital to ensure the successful implementation and continued running of the model. We need to understand that with security being such an essential component of business processes, responsibility must lie with the board. The board must be responsible for defining their security posture on all aspects of the model, and therefore must also be responsible for defining what the necessary measures should be. Without measurement, there can be no control. However, it will also be necessary to properly engage with cloud service providers to achieve a more meaningful degree of security for the cloud user.
Keywords: business data processing; cloud computing; security of data; business process; cloud security assurance model; cloud service provider; security posture; Cloud computing; Companies; Complexity theory; Privacy; Security; Standards; assurance; audit; cloud service providers; compliance; measurement; privacy; security; service level agreements; standards (ID#: 16-10218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7396206&isnumber=7396111

 

N. Aminudin, T. K. A. Rahman, N. M. M. Razali, M. Marsadek, N. M. Ramli and M. I. Yassin, “Voltage Collapse Risk Index Prediction for Real Time System’s Security Monitoring,” Environment and Electrical Engineering (EEEIC), 2015 IEEE 15th International Conference on, Rome, 2015, pp. 415-420. doi: 10.1109/EEEIC.2015.7165198
Abstract: Risk-based security assessment (RBSA) for power system security deals with the impact and probability of uncertain events occurring in the power system. In this study, the risk of voltage collapse is measured by considering the L-index as the impact of voltage collapse, while a Poisson probability density function is used to model the probability of transmission line outage. Predicting the voltage collapse risk index in real time requires precision, reliability, and short processing time. To facilitate this analysis, an artificial intelligence technique using the Generalized Regression Neural Network (GRNN) is proposed, where the spread value is determined using the Cuckoo Search (CS) optimization method. To validate the effectiveness of the proposed method, the performance of the GRNN with the optimized spread value obtained using CS is compared with a heuristic approach.
Keywords: Poisson distribution; neural nets; optimisation; power system dynamic stability; power system measurement; power system security; power transmission reliability; probability; real-time systems; risk management; GRNN; L-index; Poisson probability density function; artificial intelligent; cuckoo search; generalize regression neural network; optimization method; real time system; risk based security assessment; security monitoring; transmission line outage; voltage collapse risk index prediction; Indexes; Optimization; Power system stability; Power transmission lines; Security; Transmission line measurements; Voltage measurement; Risk based security assessment; cuckoo search optimization; voltage collapse (ID#: 16-10219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165198&isnumber=7165173
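
A GRNN with a spread parameter, as used above, is essentially kernel regression with a Gaussian kernel, so its core can be sketched in a few lines. In the sketch below the training data is synthetic and the cuckoo-search tuning of the spread is replaced by a crude grid search over leave-one-out error; both are illustrative stand-ins for the paper's setup:

    import numpy as np

    def grnn_predict(X_train, y_train, x, spread):
        # Gaussian kernel weights between the query and each training pattern.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * spread ** 2))
        return np.dot(w, y_train) / (np.sum(w) + 1e-12)

    # Synthetic stand-in for power-system features -> voltage collapse risk index.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 3))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

    # Stand-in for cuckoo-search optimization: grid search on leave-one-out MSE.
    best = min(
        (np.mean([(grnn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], s)
                   - y[i]) ** 2 for i in range(50)]), s)
        for s in (0.05, 0.1, 0.2, 0.4)
    )
    print("best spread:", best[1], "LOO MSE:", round(best[0], 4))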

 

H. Jiang, Y. Zhang, J. J. Zhang and E. Muljadi, “PMU-Aided Voltage Security Assessment for a Wind Power Plant,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-5. doi: 10.1109/PESGM.2015.7286274
Abstract: Because wind power penetration levels in electric power systems are continuously increasing, voltage stability is a critical issue for maintaining power system security and operation. The traditional methods to analyze voltage stability can be classified into two categories: dynamic and steady-state. Dynamic analysis relies on time-domain simulations of faults at different locations; however, this method needs to exhaust faults at all locations to find the security region for voltage at a single bus. With the widely located phasor measurement units (PMUs), the Thevenin equivalent matrix can be calculated by the voltage and current information collected by the PMUs. This paper proposes a method based on a Thevenin equivalent matrix to identify system locations that will have the greatest impact on the voltage at the wind power plant's point of interconnection. The number of dynamic voltage stability analysis runs is greatly reduced by using the proposed method. The numerical results demonstrate the feasibility, effectiveness, and robustness of the proposed approach for voltage security assessment for a wind power plant.
Keywords: phasor measurement; power system security; power system stability; wind power plants; PMU-aided voltage security assessment; Thevenin equivalent matrix; dynamic voltage stability analysis; electric power systems; phasor measurement units; Phasor measurement units; Power system; fault disturbance recorder; phasor measurement unit; voltage security; wind power plant (ID#: 16-10220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286274&isnumber=7285590
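
The Thevenin equivalent seen from a bus can be estimated from a window of PMU voltage and current phasors by complex least squares, which is the standard formulation behind approaches like the one above. A minimal sketch with synthetic phasor data (all values illustrative):

    import numpy as np

    # Synthetic PMU snapshots generated from a known Thevenin equivalent.
    E_true, Z_true = 1.02 * np.exp(1j * 0.05), 0.01 + 0.10j
    I = np.array([0.8 - 0.2j, 0.9 - 0.25j, 1.1 - 0.3j, 1.0 - 0.28j])
    V = E_true - Z_true * I            # measured bus voltage phasors

    # Model V_k = E_th - Z_th * I_k, stacked as a complex least-squares problem.
    A = np.column_stack([np.ones_like(I), -I])
    E_est, Z_est = np.linalg.lstsq(A, V, rcond=None)[0]
    print("E_th ~", np.round(E_est, 4), " Z_th ~", np.round(Z_est, 4))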

 

D. Adrianto and F. J. Lin, “Analysis of Security Protocols and Corresponding Cipher Suites in ETSI M2M Standards,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 777-782. doi: 10.1109/WF-IoT.2015.7389152
Abstract: ETSI, as a standards body in the telecommunications industry, has defined a comprehensive set of common security mechanisms to protect IoT/M2M systems: Service Bootstrapping, Service Connection, and mId Security. For each mechanism, there are several protocols to choose from. However, the standards do not describe under what conditions a particular protocol is the best choice. In this paper we analyze which protocol is the most suitable for the use case where an IoT/M2M application generates a large amount of data in a short period of time. The criteria used include efficiency, cost, and effectiveness of the protocol. Our analysis is based on actual measurements of an ETSI standard-compliant prototype.
Keywords: Internet of Things; cryptographic protocols; telecommunication industry; telecommunication security; ETSI M2M standards; ETSI standard-compliant prototype; Internet-of-things; IoT system; cipher suites; common security mechanisms; machine-to-machine communication; security protocol analysis; service bootstrapping; service connection; Authentication; Cryptography; Logic gates; Probes; Protocols; Servers; Machine-to-Machine Communication; The Internet of Things; security protocols (ID#: 16-10221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389152&isnumber=7389012

 

Y. Jitsui and A. Kajiwara, “Home Security Monitoring Based Stepped-FM UWB,” 2016 International Workshop on Antenna Technology (iWAT), Cocoa Beach, FL, 2016, pp. 189-191. doi: 10.1109/IWAT.2016.7434839
Abstract: This paper presents the effectiveness of a stepped-FM UWB home security sensor. The UWB sensor has attracted considerable attention because it can be expected to detect a human body anywhere in a home, not just in a single room. A few schemes have been suggested to detect an intruder in a home or room. However, it is important to detect an intruder before the house is broken into. This paper suggests a UWB sensor which can detect not only an intruder, but also a stranger attempting to enter the house. It can also estimate the intrusion port (window). Measurements were conducted under five scenarios using our fabricated sensor system installed inside a typical four-room apartment house.
Keywords: security; sensors; ultra wideband technology; UWB sensor; fabricated sensor system; four-room apartment house; home security monitoring; intrusion port; stepped-FM UWB; stepped FM UWB home security sensor; Antenna measurements; Monitoring; Routing; Security; Sensor systems; Trajectory; Transmitting antennas; Ultra-wideband; monitoring; sensor; stepped-fm (ID#: 16-10222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7434839&isnumber=7434773
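
The ranging principle behind a stepped-FM sensor is that a target at range R imprints a linear phase ramp across the frequency steps, so an inverse FFT of the stepped-frequency response yields a range profile. A minimal sketch, with the carrier, step size, and target range as illustrative assumptions:

    import numpy as np

    c = 3e8
    f0, df, N = 3.1e9, 5e6, 256            # start frequency, step, step count
    R_true = 7.5                            # target range in meters (illustrative)

    f = f0 + df * np.arange(N)
    echo = np.exp(-1j * 4 * np.pi * f * R_true / c)   # phase ramp across steps
    profile = np.abs(np.fft.ifft(echo))               # range profile
    R_axis = np.arange(N) * c / (2 * N * df)          # bin index -> meters
    print("estimated range ~", round(R_axis[np.argmax(profile)], 2), "m")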

 

R. Kastner, W. Hu and A. Althoff, “Quantifying Hardware Security Using Joint Information Flow Analysis,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1523-1528. doi: (not provided)
Abstract: Existing hardware design methodologies provide limited methods to detect security flaws or derive a measure of how well a mitigation technique protects the system. Information flow analysis provides a powerful method to test and verify a design against security properties that are typically expressed using the notion of noninterference. While this is useful in many scenarios, it does have drawbacks, primarily related to its strict enforcement of limiting all information flows, even those that could only occur in rare circumstances. Quantitative metrics based upon information theoretic measures provide an approach to loosen such restrictions. Furthermore, they are useful in understanding the effectiveness of security mitigation techniques. In this work, we discuss information flow analysis using noninterference and quantitative metrics. We describe how to use them in a synergistic manner to perform joint information flow analysis. We then use this novel technique to analyze security properties across several different hardware cryptographic cores.
Keywords: Control systems; Design methodology; Hardware; Logic gates; Measurement; Mutual information; Security (ID#: 16-10223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459555&isnumber=7459269

 

M. S. Salah, A. Maizate and M. Ouzzif, “Security Approaches Based on Elliptic Curve Cryptography in Wireless Sensor Networks,” 2015 27th International Conference on Microelectronics (ICM), Casablanca, 2015, pp. 35-38. doi: 10.1109/ICM.2015.7437981
Abstract: Wireless sensor networks are ubiquitous in monitoring applications, medical control, environmental control, and military activities. A wireless sensor network consists of a set of communicating nodes distributed over an area in order to measure a given quantity, or to receive and transmit data independently to a base station which is connected to the user via the Internet or a satellite, for example. Each node in a sensor network is an electronic device with computation, storage, communication, and power capacities. However, attacks on wireless sensor networks can have negative impacts on critical network applications, undermining the security of these networks. It is therefore important to secure these networks in order to maintain their effectiveness. In this paper, we first study security approaches based on elliptic curve cryptography, and then compare the performance of each method relative to the others.
Keywords: public key cryptography; telecommunication security; wireless sensor networks; Internet; base station; communicating nodes; critical network applications; electronic device calculation capacity; electronic device communication; electronic device power; electronic device storage; elliptic curve cryptography; environmental control; magnitude measurement; medical control; military activities; monitoring applications; security approaches; security minimization; ubiquitous network; wireless sensor network; Elliptic curve cryptography; Energy consumption; Irrigation; Jamming; Monitoring; Terrorism; AVL; CECKM; ECC; RECC-C; RECC-D; Security; WSN (ID#: 16-10224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7437981&isnumber=7437967
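
All of the compared ECC-based schemes ultimately rest on scalar multiplication of a curve point, whose cost dominates a sensor node's energy budget. A minimal double-and-add sketch over a deliberately tiny toy curve (y^2 = x^3 + 2x + 3 over GF(97), not a secure parameter choice, and unrelated to the specific schemes in the paper):

    # Toy curve y^2 = x^3 + 2x + 3 over GF(97); for illustration only.
    P_MOD, A = 97, 2

    def ec_add(P, Q):
        # Affine point addition; None represents the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return None
        if P == Q:
            s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
        x3 = (s * s - x1 - x2) % P_MOD
        return (x3, (s * (x1 - x3) - y1) % P_MOD)

    def scalar_mul(k, P):
        # Double-and-add: the operation whose cost dominates every ECC scheme.
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    G = (3, 6)                               # a point on the toy curve
    print("public key for private key 20:", scalar_mul(20, G))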

 

A. Kargarian, Yong Fu and Zuyi Li, “Distributed Security-Constrained Unit Commitment for Large-Scale Power Systems,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-1. doi: 10.1109/PESGM.2015.7286540
Abstract: Summary form only given. Independent system operators (ISOs) of electricity markets solve the security-constrained unit commitment (SCUC) problem to plan a secure and economic generation schedule. However, as the size of power systems increases, the current centralized SCUC algorithm could face critical challenges ranging from modeling accuracy to calculation complexity. This paper presents a distributed SCUC (D-SCUC) algorithm to accelerate the generation scheduling of large-scale power systems. In this algorithm, a power system is decomposed into several scalable zones which are interconnected through tie lines. Each zone solves its own SCUC problem, and a parallel calculation method is proposed to coordinate the individual D-SCUC problems. Several power systems are studied to show the effectiveness of the proposed algorithm.
Keywords: distributed algorithms; power generation economics; power markets; power system security; scheduling; D-SCUC problems; distributed SCUC algorithm; distributed security-constrained unit commitment; economic generation schedule; independent system operators; large-scale power systems; parallel calculation method; security-constrained unit commitment problem; Computers; Distance measurement; Economics; Electricity supply industry; Face; Power systems; Schedules (ID#: 16-10225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286540&isnumber=7285590

 

M. Cayford, “Measures of Success: Developing a Method for Evaluating the Effectiveness of Surveillance Technology,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 187-187. doi: 10.1109/EISIC.2015.33
Abstract: This paper presents a method for evaluating the effectiveness of surveillance technology in intelligence work. The method contains measures against which surveillance technology would be assessed to determine its effectiveness. Further research, based on interviews of experts, will inform the final version of this method, including a weighting system.
Keywords: surveillance; terrorism; Sproles method; counterterrorism; surveillance technology; Current measurement; Interviews; Privacy; Security; Standards; Surveillance; Weight measurement; effectiveness; intelligence; measures; method; technology (ID#: 16-10226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379756&isnumber=7379706

 

S. Schinagl, K. Schoon and R. Paans, “A Framework for Designing a Security Operations Centre (SOC),” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 2253-2262. doi: 10.1109/HICSS.2015.270
Abstract: Owning a SOC is an important status symbol for many organizations. Although the concept of a 'SOC' can be considered hype, only a few of them are actually effective in counteracting cybercrime and IT abuse. A literature review reveals that there is no standard framework available and no clear scope or vision for SOCs. Most of the papers describe specific implementations, often with a commercial purpose. Our research focused on identifying and defining the generic building blocks of a SOC in order to draft a design framework. In addition, a measurement method has been developed to assess the effectiveness of the protection provided by a SOC.
Keywords: computer crime; IT abuse; SOC; Security Operations Centre design; cybercrime; measurement method; Conferences; Monitoring; Organizations; Security; Standards organizations; System-on-chip; IT Abuse; Intelligence; Value; baseline security; continuous monitoring; damage control; forensic; framework; model; monitoring; pentest; secure service development; sharing knowledge (ID#: 16-10227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070084&isnumber=7069647

 

Y. Wu, T. Wang and J. Li, “Effectiveness Analysis of Encrypted and Unencrypted Bit Sequence Identification Based on Randomness Test,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1588-1591. doi: 10.1109/IMCCC.2015.337
Abstract: Identifying encrypted and unencrypted bit sequences is of great significance for network management. Compared with unencrypted bit sequences, encrypted bit sequences are more random. Randomness tests are used to evaluate the security of cipher algorithms; whether they can also be used to identify encrypted and unencrypted bit sequences requires further research. We first introduce the principle of randomness tests. According to the input size limit of each test in the SP800-22 rev1a standard, we select the frequency test, the frequency test within a block, the runs test, the longest-run-of-ones-in-a-block test, and the cumulative sums test to identify encrypted and unencrypted bit sequences. We also analyze the preconditions under which the selected tests can successfully identify encrypted and unencrypted bit sequences, and present the relevant conclusions. Finally, the effectiveness of the selected tests is verified experimentally.
Keywords: cryptography; SP800-22 rev1a standard; block test; cipher algorithms; cumulative sums test; effectiveness analysis; frequency test; network management; randomness test; security evaluation; unencrypted bit sequence identification; Ciphers; Encryption; Probability; Protocols; Standards; bit sequences; cipher algorithm; cumulative sums; encryption (ID#: 16-10228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7406118&isnumber=7405778
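
Of the five tests selected above, the frequency (monobit) test is the simplest: it compares the balance of ones and zeros against what a truly random sequence would show. A minimal sketch following the SP800-22 rev1a definition, with the 0.01 significance level taken from the standard's default (the helper names and demo data are illustrative):

    import math, os

    def monobit_pvalue(bits):
        # SP800-22 frequency test: S_n = #ones - #zeros; p = erfc(|S_n| / sqrt(2n)).
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        return math.erfc(abs(s) / math.sqrt(2.0 * n))

    def looks_encrypted(data: bytes, alpha=0.01) -> bool:
        bits = [(byte >> k) & 1 for byte in data for k in range(8)]
        return monobit_pvalue(bits) >= alpha

    print(looks_encrypted(b"To be, or not to be, that is the question. " * 100))
    print(looks_encrypted(os.urandom(4096)))   # random bytes pass with high probability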

 

X. Yuan, P. Tu and Y. Qi, “Sensor Bias Jump Detection and Estimation with Range Only Measurements,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1658-1663. doi: 10.1109/ICInfA.2015.7279552
Abstract: A target can be positioned by wireless communication sensors. In practical systems, range-based sensors may produce biased measurements. The biases are mostly constant values, but they may jump abruptly in some special scenarios. An on-line bias change detection and estimation algorithm is presented in this paper. The algorithm detects a bias jump using a Chi-Square test, and then estimates the jump through a Modified Augmented Extended Kalman Filter. The feasibility and effectiveness of the proposed algorithms are illustrated in simulation, in comparison with the Augmented Extended Kalman Filter.
Keywords: Kalman filters; estimation theory; nonlinear filters; sensors; Chi-Square test; modified augmented extended Kalman filter; online bias change detection algorithm; online bias change estimation algorithm; range only measurement; sensor bias jump detection; sensor bias jump estimation; wireless communication sensors; Change detection algorithms; Estimation; Noise; Position measurement; Wireless communication; Bias estimation; Jump of bias; Wireless positioning systems; range-only measurements (ID#: 16-10229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7279552&isnumber=7279248
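
The detection step can be sketched with a standard windowed chi-square test on the Kalman filter innovations: when the sum of squared normalized innovations exceeds the chi-square critical value, a bias jump is declared. The window length, threshold (chi-square 0.999 quantile for 10 degrees of freedom), and noise figures below are illustrative, and the filter itself is abstracted away:

    import numpy as np

    def detect_bias_jump(innovations, S, window=10, thresh=29.59):
        # Normalized innovation squared for scalar range measurements.
        eps = innovations ** 2 / S
        for k in range(window, len(eps) + 1):
            if eps[k - window:k].sum() > thresh:   # chi2_{0.999}(10) ~ 29.59
                return k                            # first detection time
        return None

    rng = np.random.default_rng(1)
    S = 0.04                             # innovation variance (0.2 m range noise)
    nu = rng.normal(0, 0.2, 400)         # nominal innovations
    nu[250:] += 0.6                      # abrupt 0.6 m bias jump at k = 250
    print("jump declared at k =", detect_bias_jump(nu, S))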

 

M. M. Hasan and H. T. Mouftah, “Encryption as a Service for Smart Grid Advanced Metering Infrastructure,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 216-221. doi: 10.1109/ISCC.2015.7405519
Abstract: Smart grid advanced metering infrastructure (AMI) bridges consumers, utilities, and the market. Its operation relies on large-scale communication networks. At the lowest level, information is acquired by smart meters and sensors. At the highest level, information is stored and processed by smart grid control centers for various purposes. The AMI conveys a large amount of sensitive information, and preventing unauthorized access to this information is a major concern for smart grid operators. Encryption is the primary security measure for preventing unauthorized access, but it incurs various overheads and deployment costs. In recent times, the security as a service (SECaaS) model has introduced a number of cloud-based security solutions, such as encryption as a service (EaaS), promising the speed and cost-effectiveness of cloud computing. In this paper, we propose a framework named encryption service for smart grid AMI (ES4AM). The ES4AM framework focuses on lightweight encryption of in-flight AMI data. We also study the feasibility of the framework using relevant simulation results.
Keywords: cloud computing; cryptography; power engineering computing; power markets; power system control; power system measurement; sensors; smart power grids; telecommunication security; ES4AM; EaaS; SECaaS model; communication networks; encryption as a service; number cloud-based security solutions; primary security measure; security as a service; smart grid AMI; smart grid advanced metering infrastructure; smart grid control centers; smart grid operators; unauthorized access; Cloud computing; Encryption; Public key; Servers; Smart grids; encryption; managed security; smart grid (ID#: 16-10230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7405519&isnumber=7405441
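
As a concrete picture of "lightweight encryption of in-flight AMI data", an authenticated cipher such as AES-GCM is a natural fit; the sketch below uses the Python cryptography package. The key provisioning, meter identifier, and associated-data label are illustrative assumptions, not part of the ES4AM design:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # A smart meter encrypting an in-flight reading for the control center.
    key = AESGCM.generate_key(bit_length=128)    # provisioned out of band, say via the EaaS
    aead = AESGCM(key)
    nonce = os.urandom(12)                       # must never repeat under one key
    reading = b'{"meter": "m-0042", "kwh": 3.17}'
    ct = aead.encrypt(nonce, reading, b"ami-v1") # ciphertext with auth tag appended
    assert aead.decrypt(nonce, ct, b"ami-v1") == reading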

 

Z. Hu, Y. Wang, X. Tian, X. Yang, D. Meng and R. Fan, “False Data Injection Attacks Identification for Smart Grids,” Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2015 Third International Conference on, Beirut, 2015, pp. 139-143. doi: 10.1109/TAEECE.2015.7113615
Abstract: False Data Injection Attacks (FDIA) on the smart grid are considered to be the most threatening cyber-physical attack. Based on the variety of measurement categories in a power system, a new method for false data detection and identification is presented. The main emphasis of our research is that we apply an equivalent measurement transformation instead of traditional weighted least squares state estimation in the state estimation (SE) process, and identify false data by a residual search method. In this paper, an FDIA case on the IEEE 14-bus system is designed in MATLAB to test the effectiveness of the algorithm. Using this method the false data can be dealt with effectively.
Keywords: IEEE standards; power system security; security of data; smart power grids; FDIA; IEEE 14 bus system; SE; cyberphysical attack threatening; equivalent measurement transformation; false data injection attack identification; power system; residual researching method; smart grid; Current measurement; Pollution measurement; Power measurement; Power systems; State estimation; Transmission line measurements; Weight measurement; false data detection and identification; false data injection attacks; smart grid (ID#: 16-10231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113615&isnumber=7113589
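
The residual check that underlies methods like this one is usually introduced through the linearized (DC) measurement model z = Hx + e: after a weighted least squares estimate, measurements whose normalized residuals stand out are flagged as false data. A minimal sketch; the toy network, noise levels, and injected value are illustrative, and the paper's equivalent-measurement transformation is not reproduced here:

    import numpy as np

    # DC state estimation: z = H x + e, followed by a normalized residual test.
    H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
    x_true = np.array([0.10, -0.05])
    rng = np.random.default_rng(2)
    z = H @ x_true + rng.normal(0, 0.01, 4)
    z[2] += 0.2                          # injected false datum on measurement 3

    W = np.eye(4) / 0.01 ** 2            # inverse measurement covariance
    G = H.T @ W @ H                      # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    r = z - H @ x_hat                    # measurement residuals

    # Residual covariance Omega = R - H G^{-1} H^T; flag large normalized residuals.
    Omega = np.linalg.inv(W) - H @ np.linalg.solve(G, H.T)
    r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
    print("normalized residuals:", np.round(r_norm, 1))   # index 2 stands out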

 

A. K. Al-Khamis and A. A. Khalafallah, “Secure Internet on Google Chrome: Client Side Anti-Tabnabbing Extension,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-4. doi: 10.1109/Anti-Cybercrime.2015.7351942
Abstract: Electronic transactions rank among the most common of our daily transactions, and the Internet has become invaluable for government, business, and personal use. This has coincided with a great increase in online attacks, particularly the development of new forms of known attacks such as tabnabbing. Users' confidentiality and personal information must therefore be protected using information security. Tabnabbing is a new form of phishing: to steal credentials, the attacker needs nothing more than users' preoccupation with other work and the exploitation of human memory weakness. The impact of this malicious attempt begins with identity theft and ends with financial loss. This has encouraged some security specialists and researchers to tackle the tabnabbing attack, but their studies are still in their infancy and not sufficient. The work done here focuses on developing an effective anti-tabnabbing extension for the Google Chrome browser to protect Internet users from becoming victims as well as to raise their awareness. The system developed is significant due to its effectiveness in detecting a tabnabbing attack and its combination of two well-known approaches used to combat online attacks. The success of the system was examined through performance measurements such as the confusion matrix and ROC analysis. The system produces promising results.
Keywords: Internet; security of data; Google Chrome browser; Internet users; ROC; client side anti-tabnabbing extension; confusion matrix; electronic transactions; financial loss; human memory weakness; information security; online attacks; personal information; phishing; secure Internet; security specialists; synchronization; tabnabbing attack; Browsers; Business; HTML; Matrix converters; Security; Uniform resource locators; Browser security; Detection; Google Extension; Phishing; Social engineering; Tabnabbing attack; Usable security (ID#: 16-10232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351942&isnumber=7351910

 

N. Soule et al., “Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287449
Abstract: The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
Keywords: security of data; attack surface minimization; cyber friendly fire; cyber security exposure; dynamic proactive defenses; moving target defenses; resilient systems; timeliness violations; Analytical models; Computational modeling; IP networks; Measurement; Minimization; Security; Surface treatment; cyber security analysis; modeling; threat assessment (ID#: 16-10233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287449&isnumber=7287407

 

K. A. Torkura, F. Cheng and C. Meinel, “A Proposed Framework for Proactive Vulnerability Assessments in Cloud Deployments,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 51-57. doi: 10.1109/ICITST.2015.7412055
Abstract: Vulnerability scanners are deployed in computer networks and software to timely identify security flaws and misconfigurations. However, cloud computing has introduced new attack vectors that require a commensurate change of vulnerability assessment strategies. To investigate the effectiveness of these scanners in cloud environments, we first conduct a quantitative security assessment of OpenStack's vulnerability lifecycle and discover severe risk levels resulting from prolonged patch release durations. More specifically, there are long time lags between OpenStack patch releases and patch inclusion in vulnerability scanning engines. This scenario leaves sufficient time for malicious actions and the creation of exploits such as zero-days. Mitigating these concerns requires systems with current knowledge of events within the vulnerability lifecycle. However, current vulnerability scanners are designed to depend on information about publicly announced vulnerabilities, which mostly includes only vulnerability disclosure dates. Accordingly, we propose a framework that mitigates these risks by gathering and correlating information from several security information sources, including exploit databases, malware signature repositories, and bug tracking systems. The information is then used to automatically generate plugins armed with current information about zero-day exploits and unknown vulnerabilities. We characterize two new security metrics to describe the discovered risks.
Keywords: cloud computing; invasive software; OpenStack vulnerability lifecycle; attack vector; bug tracking system; cloud deployment; exploit databases; malware signature repositories; proactive vulnerability assessment; security flaws; vulnerability scanner; Cloud computing; Databases; Engines; Measurement; Security; Cloud security; cloud vulnerabilities; security metrics; vulnerability lifecycle; vulnerability signature; zero-days (ID#: 16-10234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7412055&isnumber=7412034

 

S. K. Rao, D. Krishnankutty, R. Robucci, N. Banerjee and C. Patel, “Post-Layout Estimation of Side-Channel Power Supply Signatures,” Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, Washington, DC, 2015, pp. 92-95. doi: 10.1109/HST.2015.7140244
Abstract: Two major security challenges for integrated circuits (IC) that involve encryption cores are side-channel based attacks and malicious hardware insertions (trojans). Side-channel attacks predominantly use power supply measurements to exploit the correlation of power consumption with the underlying logic operations on an IC. Practical attacks have been demonstrated using power supply traces and either plaintext or cipher-text collected during encryption operations. Also, several techniques that detect trojans rely on detecting anomalies in the power supply in combination with other circuit parameters. Counter-measures against these side-channel attacks as well as detection schemes for hardware trojans are required and rely on accurate pre-fabrication power consumption predictions. However, available state-of-the-art techniques would require prohibitive full-chip SPICE simulations. In this work, we present an optimized technique to accurately estimate the power supply signatures that require significantly less computational resources, thus enabling integration of Design-for-Security (DfS) based paradigms. To demonstrate the effectiveness of our technique, we present data for a DES crypto-system that proves that our framework can identify vulnerabilities to Differential Power Analysis (DPA) attacks. Our framework can be generically applied to other crypto-systems and can handle larger IC designs without loss of accuracy.
Keywords: cryptography; estimation theory; integrated circuit layout; logic testing; power consumption; power supply circuits; security; DES cryptosystem; DPA; DfS; IC; SPICE simulation; anomaly detection; cipher-text; design-for-security; differential power analysis; encryption core; hardware trojan; integrated circuit; logic operation; malicious hardware insertion; plaintext; post-layout estimation; power consumption correlation; power supply measurement; power supply tracing; practical attack; prefabrication power consumption prediction; side-channel based attack; side-channel power supply signature estimation; Correlation; Hardware; Integrated circuits; Power supplies; SPICE; Security; Transient analysis; Hardware Security; Power Supply analysis; Side-channel attacks; Trojan Detection (ID#: 16-10235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140244&isnumber=7140225

 

S. Abraham and S. Nair, “Exploitability Analysis Using Predictive Cybersecurity Framework,” Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, Gdynia, 2015, pp. 317-323. doi: 10.1109/CYBConf.2015.7175953
Abstract: Managing security is a complex process, and existing research in the field of cybersecurity metrics provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that can enable enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework, and we analyze how the atomic attributes that make up the exploitability score, namely Access Complexity, Access Vector, and Authentication, evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model and the Alhazmi-Malaiya logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cybersecurity model to improve decision making and reduce risk.
Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model (ID#: 16-10236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175953&isnumber=7175890
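
The nonhomogeneous Markov machinery described above can be pictured with a two-state chain whose daily transition matrix depends on a time-varying exploit-availability probability. The Pareto-shaped hazard curve and all constants below are illustrative stand-ins, not the calibrated Frei/Alhazmi-Malaiya parameters from the paper:

    import numpy as np

    def exploit_prob(t, k=0.4, lam=0.00161):
        # Illustrative Pareto-style CDF for exploit availability at age t days.
        return 1.0 - (1.0 + lam * t) ** (-k)

    # States: 0 = not exploited, 1 = exploited (absorbing).
    pi = np.array([1.0, 0.0])
    for t in range(1, 366):
        p = 0.01 * exploit_prob(t)       # daily exploitation hazard (illustrative)
        P_t = np.array([[1.0 - p, p],
                        [0.0, 1.0]])     # time-dependent transition matrix
        pi = pi @ P_t

    print("P(exploited within 1 year) ~", round(pi[1], 3))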

 

C. Callegari, S. Giordano and M. Pagano, “Histogram Cloning and CuSum: An Experimental Comparison Between Different Approaches to Anomaly Detection,” Performance Evaluation of Computer and Telecommunication Systems (SPECTS), 2015 International Symposium on, Chicago, IL, 2015, pp. 1-7. doi: 10.1109/SPECTS.2015.7285294
Abstract: Due to the proliferation of new threats from spammers, attackers, and criminal enterprises, anomaly-based Intrusion Detection Systems have emerged as a key element in network security, and different statistical approaches have been considered in the literature. To cope with scalability issues, random aggregation through the use of sketches appears to be a powerful prefiltering stage that can be applied to backbone data traffic. In this paper we compare two different statistical methods for detecting anomalies in such aggregated data. In more detail, histogram cloning (with different distance measurements) and the CuSum algorithm (at the bucket level) are tested over a well-known publicly available data set. The performance analysis presented in this paper demonstrates the effectiveness of the CuSum when a proper definition of the algorithm, which takes into account the standard deviation of the underlying variables, is chosen.
Keywords: computer network security; data analysis; statistical analysis; CuSum algorithm; aggregated data anomalies; anomaly based intrusion detection systems; backbone data traffic; bucket level; cumulative sum control chart statistics; histogram cloning; network security; scalability issues; statistical methods; Aggregates; Algorithm design and analysis; Cloning; Histograms; Mathematical model; Monitoring; Standards; Anomaly Detection; CUSUM; Histogram Cloning; Network Security; Statistical Traffic Analysis (ID#: 16-10237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285294&isnumber=7285273
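
The bucket-level CuSum favored in the paper's comparison, with the standard deviation of the underlying variable folded into the statistic, reduces to a one-sided recursion. A minimal sketch; the drift constant k, threshold h, and synthetic traffic are illustrative:

    import numpy as np

    def cusum(x, mu, sigma, k=0.5, h=5.0):
        # g_t = max(0, g_{t-1} + (x_t - mu)/sigma - k); alarm when g_t > h.
        g, alarms = 0.0, []
        for t, xt in enumerate(x):
            g = max(0.0, g + (xt - mu) / sigma - k)
            if g > h:
                alarms.append(t)
                g = 0.0                  # restart after each alarm
        return alarms

    rng = np.random.default_rng(3)
    traffic = rng.normal(100, 10, 500)   # per-bucket counts (illustrative)
    traffic[300:340] += 25               # sustained anomalous volume shift
    print("alarms at buckets:", cusum(traffic, mu=100, sigma=10))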

 

K. Xiong and X. Chen, “Ensuring Cloud Service Guarantees via Service Level Agreement (SLA)-Based Resource Allocation,” 2015 IEEE 35th International Conference on Distributed Computing Systems Workshops, Columbus, OH, 2015, pp. 35-41. doi: 10.1109/ICDCSW.2015.18
Abstract: This paper studies the problem of resource management and placement for high-performance clouds. It is concerned with the three most important performance metrics defined as Quality of Service (QoS) metrics in a Service Level Agreement (SLA): response time, throughput, and utilization. We propose SLA-based approaches for resource management in clouds. Specifically, we first quantify the metrics of trustworthiness, a percentile of response time, and availability. Then, the paper formulates cloud resource management as a nonlinear optimization problem subject to SLA requirements. Finally, we give a solution to this nonlinear optimization problem and demonstrate the effectiveness of the proposed solutions through illustrative examples.
Keywords: cloud computing; contracts; nonlinear programming; resource allocation; SLA-based approaches; SLA-based resource allocation; cloud service guarantees; nonlinear optimization problem; quality of service metrics; resource management; service level agreement; Cloud computing; Measurement; Quality of service; Resource management; Security; Servers; Time factors; Performance; Resource Allocation; Service Level Agreement (ID#: 16-10238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165081&isnumber=7165001
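
A percentile of response time, one of the quantities the paper quantifies, has a closed form under simple queueing assumptions: in an M/M/1 system the response time is exponential with rate mu - lambda, so the p-th percentile is ln(1/(1-p))/(mu - lambda). The sketch below checks a 95th-percentile SLA target under that model; the queueing assumption and all figures are illustrative, not the paper's formulation:

    import math

    def response_time_percentile(lam, mu, p=0.95):
        # M/M/1 response time ~ Exp(mu - lam); p-th percentile in seconds.
        assert lam < mu, "queue must be stable"
        return math.log(1.0 / (1.0 - p)) / (mu - lam)

    lam, mu = 80.0, 100.0          # request arrival and service rates (req/s)
    target = 0.25                  # SLA: 95% of requests within 250 ms
    r95 = response_time_percentile(lam, mu)
    print(f"95th percentile = {r95 * 1000:.0f} ms, "
          f"SLA {'met' if r95 <= target else 'violated'}, "
          f"utilization = {lam / mu:.0%}")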

 

G. Sabaliauskaite, G. S. Ng, J. Ruths and A. P. Mathur, “Empirical Assessment of Methods to Detect Cyber Attacks on a Robot,” 2016 IEEE 17th International Symposium on High Assurance Systems Engineering (HASE), Orlando, FL, 2016, pp. 248-251. doi: 10.1109/HASE.2016.19
Abstract: An experiment was conducted using a robot to investigate the effectiveness of four methods for detecting cyber attacks and analyzing robot failures. Cyber attacks were implemented on three robots of the same make and model through their wireless control mechanisms. Analysis of the experimental data indicates differences in attack detection effectiveness across the detection methods. A method that compares sensor values at each time step to average historical values was the most effective. Further, attack detection effectiveness was the same or lower on the actual robots than in simulation. Factors such as attack size and timing influenced attack detection effectiveness.
Keywords: security of data; telerobotics; cyber attack detection; robot failure analysis; wireless control mechanisms; Computer crashes; Data models; Robot sensing systems; Time measurement; cyber-attacks; cyber-physical systems; robots; safety; security (ID#: 16-10239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7423162&isnumber=7423114
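
The winning method, comparing each sensor value against its historical average, amounts to a per-sensor deviation test. A minimal sketch; the 4-sigma threshold, the two-sensor layout, and the injected manipulation are illustrative assumptions:

    import numpy as np

    def detect(values, history, k=4.0):
        # Flag time steps where any sensor deviates from its historical mean
        # by more than k historical standard deviations.
        mu, sigma = history.mean(axis=0), history.std(axis=0) + 1e-9
        return np.any(np.abs(values - mu) > k * sigma, axis=1)

    rng = np.random.default_rng(5)
    history = rng.normal([0.5, 1.0], [0.05, 0.1], size=(500, 2))  # attack-free runs
    run = rng.normal([0.5, 1.0], [0.05, 0.1], size=(100, 2))
    run[60:70, 1] += 1.5          # injected manipulation of the second sensor
    print("attack steps:", np.flatnonzero(detect(run, history)))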

 

Y. Mo and R. M. Murray, “Multi-Dimensional State Estimation in Adversarial Environment,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 4761-4766. doi: 10.1109/ChiCC.2015.7260376
Abstract: We consider the estimation of a vector state based on m measurements that can be potentially manipulated by an adversary. The attacker is assumed to have limited resources and can only manipulate up to l of the m measurements; however, it can compromise those measurements arbitrarily. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the “worst-case” error against all possible manipulations by the attacker and all possible sensor noises. We show that if the system is not observable after removing 2l sensors, then the worst-case error is infinite, regardless of the estimation strategy. If the system remains observable after removing any arbitrary set of 2l sensors, we prove that the optimal state estimate can be computed by solving a semidefinite programming problem. A numerical example is provided to illustrate the effectiveness of the proposed state estimator.
Keywords: mathematical programming; minimax techniques; state estimation; adversarial environment; minimax optimization; multidimensional state estimation; semidefinite programming problem; vector state estimation; worst-case error; Indexes; Noise; Optimization; Robustness; Security; State estimation; Estimation; Security (ID#: 16-10240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260376&isnumber=7259602
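
In the notation of the abstract, the design problem can be written as a minimax program over l-sparse manipulations. A sketch of the formulation, using common symbols (y the measurements, H the observation matrix, w the noise, a the attack vector) that are assumptions of this summary rather than the paper's exact notation:

    \min_{\hat{x}(\cdot)} \; \max_{\|a\|_0 \le l,\; w} \;
        \big\| \hat{x}(y) - x \big\|
    \qquad \text{subject to} \qquad y = Hx + w + a,

with the abstract's observability condition stating that the worst-case error is finite if and only if the system remains observable after any 2l rows of H are removed.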

 

N. Antunes and M. Vieira, “On the Metrics for Benchmarking Vulnerability Detection Tools,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 505-516. doi: 10.1109/DSN.2015.30
Abstract: Research and practice show that the effectiveness of vulnerability detection tools depends on the concrete use scenario. Benchmarking can be used for selecting the most appropriate tool, helping to assess and compare alternative solutions, but its effectiveness largely depends on the adequacy of the metrics. This paper studies the problem of selecting the metrics to be used in a benchmark for software vulnerability detection tools. First, a large set of metrics is gathered and analyzed according to the characteristics of a good metric for the vulnerability detection domain. Afterwards, the metrics are analyzed in the context of specific vulnerability detection scenarios to understand their effectiveness and to select the most adequate one for each scenario. Finally, an MCDA algorithm together with experts' judgment is applied to validate the conclusions. Results show that although some traditionally used metrics such as precision and recall are adequate in some scenarios, others require alternative metrics that are seldom used in the benchmarking area.
Keywords: invasive software; software metrics; MCDA algorithm; alternative metrics; benchmarking vulnerability detection tool; software vulnerability detection tool; Benchmark testing; Concrete; Context; Measurement; Security; Standards; Automated Tools; Benchmarking; Security Metrics; Software Vulnerabilities; Vulnerability Detection (ID#: 16-10241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266877&isnumber=7266818
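
For reference, the traditional metrics the paper weighs reduce to simple counts of true/false positives and negatives over a benchmark workload. A minimal sketch; informedness is included only as an example of a less common alternative, and the counts are illustrative:

    def detection_metrics(tp, fp, fn, tn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)                   # true positive rate
        f1 = 2 * precision * recall / (precision + recall)
        specificity = tn / (tn + fp)
        informedness = recall + specificity - 1   # Youden's J
        return precision, recall, f1, informedness

    # Illustrative tool output on a benchmark with 50 real vulnerabilities.
    print([round(m, 3) for m in detection_metrics(tp=35, fp=20, fn=15, tn=430)])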

 

D. Evangelista, F. Mezghani, M. Nogueira and A. Santos, “Evaluation of Sybil Attack Detection Approaches in the Internet of Things Content Dissemination,” 2016 Wireless Days (WD), Toulouse, France, 2016, pp. 1-6. doi: 10.1109/WD.2016.7461513
Abstract: The Internet of Things (IoT) comprises a diversity of heterogeneous objects that collect data in order to disseminate information to applications. The IoT data dissemination service can be tampered with by several types of attackers. Among these, the Sybil attack has emerged as the most critical, since it compromises data confidentiality. Although there are approaches against the Sybil attack in several services, they disregard the presence of heterogeneous devices and rely on complex solutions. This paper presents a study highlighting the strengths and weaknesses of Sybil attack detection approaches when applied to IoT content dissemination. An evaluation of the LSD solution was made to assess its effectiveness and efficiency in an IoT network.
Keywords: Authentication; Cryptography; Feature extraction; Internet of things; Measurement; Security and privacy in the Internet of Things; Security in networks; Sybil Detection Techniques (ID#: 16-10242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7461513&isnumber=7461453

 

X. Yang, D. Lo, X. Xia, Y. Zhang and J. Sun, “Deep Learning for Just-in-Time Defect Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 17-26. doi: 10.1109/QRS.2015.14
Abstract: Defect prediction is a very meaningful topic, particularly at the change level. Change-level defect prediction, also referred to as just-in-time defect prediction, can not only ensure software quality in the development process, but also help developers check and fix defects in time. Nowadays, deep learning is a hot topic in the machine learning literature. Whether deep learning can be used to improve the performance of just-in-time defect prediction remains uninvestigated. In this paper, to bridge this research gap, we propose an approach, Deeper, which leverages deep learning techniques to predict defect-prone changes. We first build a set of expressive features from a set of initial change features by leveraging a deep belief network algorithm. Next, a machine learning classifier is built on the selected features. To evaluate the performance of our approach, we use datasets from six large open source projects, i.e., Bugzilla, Columba, JDT, Platform, Mozilla, and PostgreSQL, containing a total of 137,417 changes. We compare our approach with the approach proposed by Kamei et al. The experimental results show that on average across the 6 projects, Deeper discovers 32.22% more bugs than Kamei et al.'s approach (51.04% versus 18.82% on average). In addition, Deeper achieves F1-scores of 0.22-0.63, which are statistically significantly higher than those of Kamei et al.'s approach on 4 out of the 6 projects.
Keywords: just-in-time; learning (artificial intelligence); pattern classification; software quality; change-level defect prediction; deep learning; just-in-time defect prediction; machine learning classifier; machine learning literature; Computer bugs; Feature extraction; Logistics; Machine learning; Measurement; Software quality; Training; Cost Effectiveness; Deep Belief Network; Deep Learning; Just-In-Time Defect Prediction (ID#: 16-10243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272910&isnumber=7272893
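
The Deeper pipeline, unsupervised feature learning followed by a conventional classifier, can be approximated with off-the-shelf components. The sketch below substitutes a single RBM layer for the paper's deep belief network and uses synthetic change-level features, so it illustrates the architecture rather than reproducing the reported results:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    # Synthetic change-level features (LOC added, files touched, entropy, ...).
    rng = np.random.default_rng(4)
    X = rng.random((1000, 14))
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=1000) > 1.0).astype(int)

    model = Pipeline([
        ("scale", MinMaxScaler()),                  # RBMs expect [0, 1] inputs
        ("rbm", BernoulliRBM(n_components=20, learning_rate=0.05,
                             n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)), # classifier on learned features
    ])
    model.fit(X[:800], y[:800])
    print("holdout accuracy:", round(model.score(X[800:], y[800:]), 3))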

 

H. Hemmati, “How Effective Are Code Coverage Criteria?,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 151-156. doi: 10.1109/QRS.2015.30
Abstract: Code coverage is one of the main metrics to measure the adequacy of a test case/suite. It has been studied a lot in academia and used even more in industry. However, a test case may cover a piece of code (no matter what coverage metric is being used) but miss its faults. In this paper, we studied several existing and standard control and data flow coverage criteria on a set of developer-written fault-revealing test cases from several releases of five open source projects. We found that a) basic criteria such as statement coverage is very weak (detecting only 10% of the faults), b) combining several control-flow coverage together is better than the strongest criterion alone (28% vs. 19%), c) a basic data-flow coverage can detect many undetected faults (79% of the undetected faults by control-flow coverage can be detected by a basic def/use pair coverage), and d) on average 15% of the faults may not be detected by any of the standard control and data-flow coverage criteria. Classification of the undetected faults showed that they are mostly to do with specification (missing logic).
Keywords: data flow analysis; program testing; public domain software; software quality; code coverage criteria; control-flow coverage; data flow coverage criteria; developer-written fault-revealing test cases; missing logic; open source projects; statement coverage; Arrays; Data mining; Fault diagnosis; Instruments; Java; Measurement; Testing; Code Coverage; Control Flow; Data Flow; Effectiveness; Experiment; Fault Categorization; Software Testing (ID#: 16-10244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272926&isnumber=7272893
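
The gap between statement coverage and def/use coverage that drives the paper's findings is easy to reproduce. In the hypothetical function below, two tests execute every statement yet never exercise the faulty definition-use pair, which a def/use-pair criterion would force:

    def scaled(x, flag):
        y = 1                 # def-1 of y
        if flag:
            y = 0             # def-2 of y (faulty: should have been 2)
        if x > 10:
            return x / y      # use of y: pairing def-2 with this use divides by zero
        return x * y

    # Two tests achieving 100% statement coverage while missing the fault:
    assert scaled(5, True) == 0      # executes def-2, but not the division
    assert scaled(20, False) == 20   # executes the division, but with def-1
    # A def/use-adequate suite must also cover the pair (def-2, x / y):
    # scaled(20, True) raises ZeroDivisionError, exposing the fault.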

 

S. Sanders and J. Kaur, “Can Web Pages Be Classified Using Anonymized TCP/IP Headers?,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 2272-2280. doi: 10.1109/INFOCOM.2015.7218614
Abstract: Web page classification is useful in many domains- including ad targeting, traffic modeling, and intrusion detection. In this paper, we investigate whether learning-based techniques can be used to classify web pages based only on anonymized TCP/IP headers of traffic generated when a web page is visited. We do this in three steps. First, we select informative TCP/IP features for a given downloaded web page, and study which of these remain stable over time and are also consistent across client browser platforms. Second, we use the selected features to evaluate four different labeling schemes and learning-based classification methods for web page classification. Lastly, we empirically study the effectiveness of the classification methods for real-world applications.
Keywords: Web sites; online front-ends; security of data; telecommunication traffic; transport protocols; TCP/IP header; Web page classification; ad targeting; client browser platforms; intrusion detection; labeling schemes; learning-based classification methods; learning-based techniques; traffic modeling; Browsers; Feature extraction; IP networks; Labeling; Navigation; Streaming media; Web pages; Traffic Classification; Web Page Measurement (ID#: 16-10245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218614&isnumber=7218353
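
The following Python sketch, using scikit-learn, illustrates the general learning-based pipeline the paper evaluates; the feature set (byte counts, packet counts, and so on) and the randomly generated data are stand-ins, not the paper's actual features or corpus:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-visit features derived only from headers, e.g.
# [total bytes down, packet count, mean packet size, server flow count],
# for 200 visits across two page classes. Random data is used here, so
# accuracy will hover near chance; real header features separate classes.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cv accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())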

 

X. Luo, J. Li, Z. Jiang and X. Guan, “Complete Observation Against Attack Vulnerability for Cyber-Physical Systems with Application to Power Grids,” 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), Changsha, 2015, pp. 962-967. doi: 10.1109/DRPT.2015.7432368
Abstract: This paper presents a novel framework based on system observability to address the structural vulnerability of cyber-physical systems (CPSs) under attack, with application to power grids. A method of adding power measurement points is applied to detect bus angles and voltages by placing detection points between bus lines in the power grid. Generator dynamic equations are then built to analyze the rotor angles and angular velocities of the generators, and the system is simplified using Phasor Measurement Units (PMUs) to observe generator status. Based on the impact of a series of attacks on the grid, grid measurement detection and state estimation are used to observe the grid state, enabling monitoring of the entire grid. It is shown that the structural vulnerability against attacks can be addressed by combining the above observations. Finally, simulations are used to demonstrate the effectiveness of the proposed method, showing that attacks can be effectively monitored to improve CPS security.
Keywords: electric generators; phasor measurement; power grids; power system protection; rotors; CPS security; PMU; attack vulnerability; cyber-physical systems; generator dynamic equations; grid measurements detection; power management units; power measurement point; state estimation; structural vulnerability; system observability; Angular velocity; Generators; Monitoring; Phasor measurement units; Power grids; Power measurement; Rotors; CPS; observation; structural vulnerability; undetectable attack (ID#: 16-10246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7432368&isnumber=7432193
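
As a minimal illustration of the system-observability reasoning underlying the framework (with toy matrices, not the paper's grid model), the standard rank test checks whether the chosen measurements suffice to reconstruct the full state:

import numpy as np

def is_observable(A, C):
    """Rank test on the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # toy swing-like dynamics
C = np.array([[1.0, 0.0]])                # a PMU measures only the first state
print("observable:", is_observable(A, C))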

 

L. M. Putranto, R. Hara, H. Kita and E. Tanaka, “Risk-Based Voltage Stability Monitoring and Preventive Control Using Wide Area Monitoring System,” PowerTech, 2015 IEEE Eindhoven, Eindhoven, 2015, pp. 1-6. doi: 10.1109/PTC.2015.7232547
Abstract: Nowadays, power systems tend to be operated under heavily stressed load conditions, which can cause voltage stability problems. Moreover, the occurrence probability of contingencies is increasing due to the growth in power system size and complexity. This paper proposes a new preventive control scheme based on voltage stability and security monitoring by means of wide area monitoring systems (WAMS). The proposed control scheme ensures voltage stability under major N-1 line contingencies, which are selected from all possible N-1 contingencies by considering their occurrence probability and/or the load curtailment they cause. Several cases based on the IEEE 57-bus test system are used to demonstrate the effectiveness of the proposed method. The demonstration results show that the proposed method can make an important contribution to improving voltage stability and security performance.
Keywords: IEEE standards; load regulation; power system control; power system economics; power system measurement; power system security; power system stability; probability; IEEE 57-bus test system; N-1 line contingency probability; WAMS; load curtailment; power system complexity; power system operation; preventive control scheme; risk-based voltage stability monitoring; security monitoring; wide area monitoring system; Fuels; Generators; Indexes; Power system stability; Security; Stability criteria; Voltage control; Economics Load Dispatch; Load Shedding; Multistage Preventive Control; Optimum Power Flow; Voltage Stability Improvement (ID#: 16-10247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7232547&isnumber=7232233
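
A hedged sketch of the risk-based contingency screening step: each N-1 contingency is ranked by occurrence probability times a severity proxy, and only the high-risk set feeds the preventive-control stage. The lines and figures below are illustrative placeholders, not values from the IEEE 57-bus study:

# Rank N-1 line contingencies by risk = probability * severity proxy
# (severity could be, e.g., voltage-margin reduction or load curtailed).
contingencies = {                 # line: (outage probability, severity proxy)
    "line 8-9":   (0.004, 0.9),
    "line 1-2":   (0.001, 0.3),
    "line 25-30": (0.006, 0.7),
}
risk = {k: p * s for k, (p, s) in contingencies.items()}
for line, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{line}: risk = {r:.4f}")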

 

P. Zhonghua, H. Fangyuan, Z. Yuguo and S. Dehui, “False Data Injection Attacks for Output Tracking Control Systems,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 6747-6752. doi: 10.1109/ChiCC.2015.7260704
Abstract: Cyber-physical systems (CPSs) have been gaining popularity for their high potential in widespread applications, and the security of CPSs has become a pressing problem. In this paper, an output tracking control (OTC) method is designed for discrete-time linear time-invariant Gaussian systems. The output tracking error is regarded as an additional state; a Kalman filter-based incremental state observer and an LQG-based augmented state feedback control strategy are designed, and a Euclidean-based detector is used for detecting false data injection attacks. Stealthy false data attacks, which can completely disrupt the normal operation of the OTC system without being detected, are injected into the sensor measurements and control commands, respectively. Three kinds of numerical examples are employed to illustrate the effectiveness of the designed false data injection attacks.
Keywords: Gaussian processes; Kalman filters; discrete time systems; linear systems; observers; security of data; sensors; state feedback; CPS security; Euclidean-based detector; Kalman filter-based incremental state observer; LQG-based augmented state feedback control strategy; OTC method; OTC systems; cyber-physical systems; discrete-time linear time-invariant Gaussian systems; false data injection attacks; output track control method; output tracking control systems; output tracking error; sensor measurements; Detectors; Robot sensing systems; Security; State estimation; State feedback; Cyber-physical systems; Kalman filter; false data injection attacks; output tracking control (ID#: 16-10248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7260704&isnumber=7259602
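
To make the detection side concrete, here is a minimal Euclidean residual detector of the kind the designed attacks must evade; a stealthy injection keeps the residual below the alarm threshold. The threshold and values are invented for illustration:

import numpy as np

def euclidean_detector(y_measured, y_predicted, tau):
    """Raise an alarm when the innovation norm exceeds threshold tau."""
    residual = np.linalg.norm(y_measured - y_predicted)
    return residual > tau

y_pred = np.array([1.00, 0.50])            # observer's predicted output
attack = np.array([0.05, -0.04])           # injected bias, small by design
y_meas = y_pred + attack
print("alarm:", euclidean_detector(y_meas, y_pred, tau=0.1))  # stays stealthy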

 

A. Basak, F. Zhang and S. Bhunia, “PiRA: IC Authentication Utilizing Intrinsic Variations in Pin Resistance,” Test Conference (ITC), 2015 IEEE International, Anaheim, CA, 2015, pp. 1-8. doi: 10.1109/TEST.2015.7342388
Abstract: The rapidly rising incidence of counterfeit Integrated Circuits (ICs), including cloning attacks, poses a significant threat to the semiconductor industry. Conventional functional/structural testing is mostly ineffective at identifying different forms of cloned ICs. On the other hand, existing design for security (DfS) measures are often not attractive due to additional design effort, hardware overhead and test cost. In this paper, we propose a novel robust IC authentication approach, referred to as PiRA, to validate the integrity of ICs in the presence of cloning attacks. It exploits intrinsic random variations in pin resistances across ICs to create unique chip-specific signatures for authentication. Pin resistance is defined as the resistance looking into or out of the pin under set parameters and biasing conditions, measured by standard tests for IC defect/performance analysis such as input leakage, protection diode, and output load current tests. A major advantage of PiRA over existing methodologies is that it incurs virtually zero design effort and overhead. Furthermore, unlike most authentication approaches, it works for all chip types including analog/mixed-signal ICs and can be applied to legacy designs. Theoretical analysis as well as experimental measurements with common digital and analog ICs verify the effectiveness of PiRA.
Keywords: authorisation; integrated circuit testing; mixed analogue-digital integrated circuits; semiconductor diodes; IC defect-performance analysis; PiRA; analog-mixed-signal IC; biasing conditions; chip-specific signatures; cloning attacks; counterfeit integrated circuits; design for security; functional-structural testing; input leakage intrinsic random variations; intrinsic variations; output load current tests; pin resistance; protection diode; robust IC authentication approach; semiconductor industry; set parameters; Authentication; Cloning; Current measurement; Electrical resistance measurement; Integrated circuits; Resistance; Semiconductor device measurement (ID#: 16-10249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342388&isnumber=7342364
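
A simplified enrollment-and-verification flow for a resistance-signature scheme in the spirit of PiRA is sketched below; the resistance values, tolerance, and matching rule are assumptions for illustration, while the paper derives signatures from standard parametric tests:

import numpy as np

enrolled = np.array([1.02, 0.97, 1.10, 0.97])  # k-ohm, genuine chip's pins

def authenticate(measured, enrolled, tol=0.05):
    # Accept only if every pin's relative deviation stays within tolerance.
    return bool(np.all(np.abs(measured - enrolled) / enrolled < tol))

genuine = enrolled + np.array([0.01, -0.01, 0.02, 0.00])  # remeasurement noise
clone   = np.array([1.15, 0.90, 1.01, 1.08])              # different process
print("genuine accepted:", authenticate(genuine, enrolled))
print("clone accepted:  ", authenticate(clone, enrolled))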

 

M. Shiozaki, T. Kubota, T. Nakai, A. Takeuchi, T. Nishimura and T. Fujino, “Tamper-Resistant Authentication System with Side-Channel Attack Resistant AES and PUF Using MDR-ROM,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 1462-1465. doi: 10.1109/ISCAS.2015.7168920
Abstract: Among the threats to security devices identified in the last decade are side-channel attacks (SCAs) and invasive attacks. An SCA reveals the secret key of a cryptographic circuit by measuring power consumption or electromagnetic radiation during cryptographic operations. We have proposed the MDR-ROM scheme as a low-power, small-area countermeasure against SCAs. Meanwhile, secret data in nonvolatile memory can be analyzed through invasive attacks, allowing an adversary to counterfeit and clone the cryptographic device. We proposed combining the MDR-ROM scheme with the Physical Unclonable Function (PUF) technique, which is expected to serve as a countermeasure against counterfeiting, and a prototype chip was fabricated in a 180nm CMOS technology. In addition, a keyless entry demonstration system was built to present the effectiveness of the SCA resistance and the PUF technique. Our experiments confirmed that this demonstration system achieved sufficient tamper resistance.
Keywords: CMOS integrated circuits; cryptography; random-access storage; read-only storage; 180nm CMOS technology; AES; MDR-ROM scheme; PUF; SCA; cryptographic circuit; cryptographic operations; electromagnetic radiation measurement; invasive attacks; low-power counter-measure; nonvolatile memory; physical unclonable function technique; power consumption measurement; secret key; security devices; side-channel attack resistant; small-area counter-measure; tamper-resistant authentication system; Authentication; Correlation; Cryptography; Large scale integration; Power measurement; Read only memory; Resistance; IO-masked dual-rail ROM (MDR-ROM); Side channel attacks (SCA); physical unclonable function (PUF); tamper-resistant authentication system (ID#: 16-10250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168920&isnumber=7168553
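
The PUF side of such a design typically authenticates by fractional Hamming distance between an enrolled response and a fresh, slightly noisy one; the sketch below shows that textbook acceptance rule, not the MDR-ROM circuit itself:

def hamming_fraction(a, b):
    """Fraction of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

stored   = "1011010011101001"   # response recorded at enrollment
observed = "1011010111101001"   # fresh response with one noisy bit
# Accept intra-chip noise (typically < ~10%) but reject inter-chip
# responses, which differ in roughly 50% of bits.
print("accept:", hamming_fraction(stored, observed) < 0.10)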

 

S. Alsemairi and M. Younis, “Clustering-Based Mitigation of Anonymity Attacks in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417501
Abstract: The use of wireless sensor networks (WSNs) can be advantageous in applications that operate in hostile environments such as security surveillance and military battlefields. The operation of a WSN typically involves collecting sensor measurements at an in-situ Base-Station (BS) that further processes the data and either takes action or reports findings to a remote command center. The BS thus plays a vital role and is usually guarded by concealing its identity and location. However, the BS can be susceptible to traffic analysis attacks. Given the limited communication range of the individual sensors and the objective of conserving their energy supply, the sensor readings are forwarded to the BS over multi-hop paths. Such a routing topology allows an adversary to correlate intercepted transmissions, even without being able to decode them, and apply attack models such as Evidence Theory (ET) to determine the position of the BS. This paper proposes a technique to counter such an attack by reshaping the routing topology. Basically, the nodes in a WSN are grouped into unevenly-sized clusters, and each cluster has a designated aggregation node (cluster head). Inter-cluster-head routes are then formed so that the BS experiences low traffic volume and does not become distinguishable among the WSN nodes. The simulation results confirm the effectiveness of the proposed technique in boosting the anonymity of the BS.
Keywords: military communication; telecommunication network routing; telecommunication traffic; wireless sensor networks; WSN nodes; anonymity attacks; clustering-based mitigation; evidence theory; in-situ base-station; military battlefield; security surveillance; Measurement; Optimized production technology; Receivers; Routing; Security; Topology; Wireless sensor networks (ID#: 16-10251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7417501&isnumber=7416057
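
The traffic-analysis signal the clustering scheme tries to suppress can be illustrated in a few lines: in multi-hop convergecast, forwarding volume peaks near the BS, so an eavesdropper who merely counts transmissions can shortlist its neighborhood. The topology and counts below are invented:

# Transmissions overheard per node in a toy convergecast tree.
tx_counts = {
    "BS-neighbor-1": 412, "BS-neighbor-2": 398,
    "mid-hop-a": 120, "mid-hop-b": 131,
    "leaf-x": 11, "leaf-y": 9,
}
suspects = sorted(tx_counts, key=tx_counts.get, reverse=True)
print("most likely BS neighborhood:", suspects[:2])
# Uneven clustering with inter-cluster-head routes aims to flatten these
# counts so that no node stands out as the sink's neighbor.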

 

Wei Li, Weiyi Qian and Mingqiang Yin, “Portfolio Selection Models in Uncertain Environment,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 471-475. doi: 10.1109/FSKD.2015.7381988
Abstract: For portfolio selection (PS) problems, it is difficult to infer security returns from historical data. To overcome this, we treat security returns as uncertain variables. In this paper, two portfolio selection models are presented for this uncertain environment. To express divergence, the cross-entropy of uncertain variables is introduced into the mathematical models. In both models, the expected value expresses the investment return, while the variance or semivariance, respectively, expresses the risk. The mathematical models are solved by the gravitational search algorithm proposed by E. Rashedi. We apply the proposed models to two examples to demonstrate their effectiveness and correctness.
Keywords: entropy; investment; search problems; gravitation search algorithm; investment return; mathematical models; portfolio selection models; uncertain environment; uncertain variables cross-entropy; Force; Investment; Mathematical model; Measurement uncertainty; Portfolios; Security; Uncertainty; cross-entropy; gravitation search algorithm; portfolio selection problem; uncertain measure (ID#: 16-10252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381988&isnumber=7381900
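
A minimal sketch of the expected-value/variance objective such models optimize is given below; plain return-scenario sampling stands in for uncertainty theory, and the weights and returns are illustrative:

import numpy as np

returns = np.array([[0.04, 0.02, 0.07],    # sampled return scenarios
                    [0.01, 0.03, -0.02],   # (rows: scenarios, cols: assets)
                    [0.05, 0.02, 0.09]])
w = np.array([0.5, 0.3, 0.2])              # candidate portfolio weights

portfolio = returns @ w
expected, risk = portfolio.mean(), portfolio.var()
print(f"expected return {expected:.4f}, variance (risk) {risk:.6f}")
# A population-based solver such as the gravitational search algorithm
# would search over w to maximize expected return under a cap on risk.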

 

J. R. Ward and M. Younis, “A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 972-977. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary's attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary's attention away from the BS's location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: cryptographic protocols; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; anonymity-boosting techniques; authentication; base station; central sink; communication protocol; cross-layer defense scheme; encryption; network-layer techniques; physical-layer techniques; single-layer techniques; traffic analysis attacks; traffic analysis techniques; Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-10253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357571&isnumber=7357245

 

C. Moreno, S. Kauffman and S. Fischmeister, “Efficient Program Tracing and Monitoring Through Power Consumption — with a Little Help from the Compiler,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1556-1561. doi: (not included).
Abstract: Ensuring correctness and enforcing security are growing concerns given the complexity of modern connected devices and safety-critical systems. A promising approach is non-intrusive runtime monitoring through reconstruction of program execution traces from power consumption measurements. This can be used for verification, validation, debugging, and security purposes. In this paper, we propose a framework for increasing the effectiveness of power-based program tracing techniques. These systems determine the most likely block of source code that produced an observed power trace (CPU power consumption as a function of time). Our framework maximizes distinguishability between power traces for different code blocks. To this end, we provide a special compiler optimization stage that reorders intermediate representation (IR) and determines the reorderings that lead to power traces with highest distances between each other, thus reducing the probability of misclassification. Our work includes an experimental evaluation, using LLVM for an ARM architecture. Experimental results confirm the effectiveness of our technique.
Keywords: optimisation; power consumption; probability; program compilers; program diagnostics; safety-critical software; IR; compiler optimization stage; distinguishability maximization; intermediate representation; misclassification probability; power consumption measurement; program compiler; program execution trace reconstruction; program monitoring; program tracing; safety-critical system; Electronic mail; Monitoring; Optimization; Power demand; Power measurement; Security; Training (ID#: 16-10254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7459561&isnumber=7459269
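
The classification step that the proposed compiler stage supports can be sketched as nearest-template matching of an observed power trace against per-block templates; the traces below are synthetic placeholders:

import numpy as np

templates = {                       # mean power trace per candidate block
    "block_A": np.array([1.0, 1.2, 0.8, 1.1]),
    "block_B": np.array([0.6, 0.7, 0.9, 0.5]),
}
observed = np.array([0.95, 1.15, 0.85, 1.05])

best = min(templates, key=lambda b: np.linalg.norm(observed - templates[b]))
print("most likely block:", best)
# The IR-reordering stage picks the ordering whose templates have the
# largest pairwise distances, reducing the probability of misclassification.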

 

L. Zhang, D. Chen, Y. Cao and X. Zhao, “A Practical Method to Determine Achievable Rates for Secure Steganography,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1274-1281. doi: 10.1109/HPCC-CSS-ICESS.2015.62
Abstract: With a chosen steganographic method and a cover image, the steganographer always hesitates over how many bits should be embedded. Although there has been work on theoretical capacity analysis, it is still difficult to apply in practice. In this paper, we propose a practical method to determine the appropriate hiding rate for a cover image with the purpose of evading possible statistical detection. The core of this method is a non-linear regression, which is used to learn the mapping between the detection rate and the estimated rate with respect to a specific steganographic method. To deal with images of different visual content, multiple regression functions are trained on image groups with different texture complexity levels. To demonstrate the effectiveness of the proposed method, estimators are constructed for selected steganographic algorithms in both the spatial and JPEG transform domains.
Keywords: image watermarking; regression analysis; steganography; transforms; JPEG transform domain; multiple regression function; nonlinear regression method; secure steganography; specific steganographic method; statistical detection; texture complexity level; theoretical capacity analysis; Complexity theory; Entropy; Measurement; Payloads; Security; Transform coding; Yttrium; capacity analysis; estimated rate; non-linear regression (ID#: 16-10255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336343&isnumber=7336120
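
The rate-selection idea can be sketched as follows: fit a regression from embedding rate to detector accuracy, then choose the largest rate whose predicted detection stays near chance. A logistic curve and invented data points stand in for the paper's learned regression functions:

import numpy as np
from scipy.optimize import curve_fit

rates  = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # bits per pixel
detect = np.array([0.51, 0.55, 0.68, 0.84, 0.95])   # detector accuracy

def logistic(r, k, r0):
    # Detection rises from chance (0.5) toward 1.0 as the rate grows.
    return 0.5 + 0.5 / (1.0 + np.exp(-k * (r - r0)))

(k, r0), _ = curve_fit(logistic, rates, detect, p0=[10.0, 0.2])
grid = np.linspace(0.01, 0.5, 500)
safe = grid[logistic(grid, k, r0) < 0.55]            # near-chance detection
print("max 'safe' rate: %.3f bpp" % safe.max())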

 

N. J. Ahuja and I. Singh, “Innovative Road Map for Leveraging ICT Enabled Tools for Energy Efficiency — from Awareness to Adoption,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 702-707. doi: 10.1109/ICACCE.2015.45
Abstract: Teaching energy efficiency measures at the grassroots level, from awareness through adoption, is urgently needed and is a significant step towards energy security. The present work proposes a project-oriented roadmap for this purpose. The approach begins with a pre-survey of energy users to understand their awareness level and current energy consumption patterns and to ascertain their likely adoption of innovative energy efficiency measures. It also assesses their interest in different IT tools and mechanisms, including their interface design preferences. Material custom-tailored to the needs of the users is then delivered through the identified IT methods. A post-survey conducted after an active IT intervention period captures the change from the pre-survey. Finally, analytical tools applied in the concluding phase judge the interventions' effectiveness in terms of awareness generation, technology adoption level, change in energy consumption patterns, and energy savings.
Keywords: energy conservation; energy consumption; power aware computing; power engineering computing; user interfaces; ICT enabled tool; energy consumption pattern; energy efficiency; energy security; innovative road map; interface design preference; project-oriented approach; Current measurement; Energy consumption; Energy efficiency; Energy measurement; Mobile applications; Portals; Training; Computer Based Training; Energy Efficiency; ICT adoption; Mobile applications; Web-based Applications (ID#: 16-10256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306773&isnumber=7306547

 

M. J. F. Alenazi and J. P. G. Sterbenz, “Comprehensive Comparison and Accuracy of Graph Metrics in Predicting Network Resilience,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 157-164. doi: 10.1109/DRCN.2015.7149007
Abstract: Graph robustness metrics have been used largely to study the behavior of communication networks in the presence of targeted attacks and random failures. Several researchers have proposed new graph metrics to better predict network resilience and survivability against such attacks. Most of these metrics have been compared to a few established graph metrics to evaluate their effectiveness in measuring network resilience. In this paper, we perform a comprehensive comparison of the most commonly used graph robustness metrics. First, we show how each metric is determined and calculate its values for baseline graphs. Using several types of random graphs, we study the accuracy of each robustness metric in predicting network resilience against centrality-based attacks. The results support three conclusions. First, our path diversity metric has the highest accuracy in predicting network resilience for structured baseline graphs. Second, the variance of node-betweenness centrality mostly has the best accuracy in predicting network resilience for Waxman random graphs. Third, path diversity, network criticality, and effective graph resistance have high accuracy in measuring network resilience for Gabriel graphs.
Keywords: graph theory; telecommunication network reliability; telecommunication security; Gabriel graphs; Waxman random graphs; baseline graphs; centrality-based attacks; communication network behavior; comprehensive comparison; effective graph resistance; graph robustness metrics accuracy; network criticality; network resilience measurement; network resilience prediction; node-betweenness centrality variance; path diversity metric; random failures; survivability prediction; targeted attacks; Accuracy; Communication networks; Joining processes; Measurement; Resilience; Robustness; Connectivity evaluation; Fault tolerance; Graph robustness; Graph spectra; Network design; Network resilience; Network science; Reliability; Survivability (ID#: 16-10257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7149007&isnumber=7148972
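
Two of the compared indicators are easy to reproduce with networkx; the sketch below computes the variance of node-betweenness centrality on a small random graph and simulates a centrality-based attack (the graph and parameters are illustrative, not the paper's graph families):

import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(n=50, p=0.1, seed=1)
bc = nx.betweenness_centrality(G)
print("betweenness variance:", np.var(list(bc.values())))

# Centrality-based attack: remove the top-5 nodes by betweenness and
# observe how connectivity decays.
for node in sorted(bc, key=bc.get, reverse=True)[:5]:
    G.remove_node(node)
print("largest component after attack:",
      len(max(nx.connected_components(G), key=len)))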

 

J. Wang, M. Zhao, Q. Zeng, D. Wu and P. Liu, “Risk Assessment of Buffer ‘Heartbleed’ Over-Read Vulnerabilities,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 555-562. doi: 10.1109/DSN.2015.59
Abstract: Buffer over-read vulnerabilities (e.g., Heartbleed) can lead to serious information leakage and monetary loss. Most previous approaches focus on buffer overflow (i.e., over-write) and are either infeasible (e.g., canary) or impractical (e.g., bounds checking) for dealing with over-read vulnerabilities. As an emerging type of vulnerability, buffer over-read requires an in-depth understanding of the vulnerability itself, the security risk, and the defense methods. This paper presents a systematic methodology to evaluate the potential risks of unknown buffer over-read vulnerabilities. Specifically, we model buffer over-read vulnerabilities and focus on quantifying how much information can potentially be leaked. We perform risk assessment using the RUBiS benchmark, an auction site prototype modeled after eBay.com. We evaluate the effectiveness and performance of a few mitigation techniques and conduct a quantitative risk measurement study. We find that even simple techniques can achieve significant reductions in information leakage against over-reads with a reasonable performance penalty. We summarize the experience learned from the study, hoping to facilitate further studies on the over-read vulnerability.
Keywords: Internet; risk management; security of data; Heartbleed; buffer over-read vulnerabilities; defense method; information leakage; monetary lost; risk assessment; security risk; vulnerability method; Benchmark testing; Entropy; Heart rate variability; Measurement; Memory management; Payloads; Risk management (ID#: 16-10258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266882&isnumber=7266818
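
A back-of-envelope version of the leakage quantification is sketched below: the byte-entropy of a region an over-read could expose serves as a proxy for how much information the leak carries. The memory contents are simulated, and the measure is far simpler than the paper's model:

import math
from collections import Counter

# Simulated memory region an over-read could expose: a secret followed
# by zeroed padding.
leaked = b"user=alice;token=9f8a7b6c5d4e" + bytes(32)

counts = Counter(leaked)
n = len(leaked)
entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
print(f"{entropy:.2f} bits/byte over {n} leaked bytes")
# Mitigations such as zeroing freed buffers push the padding region's
# contribution toward 0, shrinking the useful leak.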

 

K. Z. Ye, E. M. Portnov, L. G. Gagarina and K. Z. Lin, “Method for Increasing Reliability for Transmission State of Power Equipment Energy,” 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 433-437. doi: 10.1109/GlobalSIP.2015.7418232
Abstract: In this paper, the problems of transmitting trustworthy monitoring and control signals through the communication channels of sophisticated telemechanics systems using IEC 60870-5-101 (104) are discussed. A mathematically justified discrepancy between the concepts of “information veracity” and “information protection from noise in the communication channel” is shown. Principles of combined encoding that ensure a high level of veracity for energy supply systems are proposed. The paper also presents a methodology for estimating the veracity of information signals in telemechanics systems, along with the results of experimental studies of the proposed encoding principles' effectiveness.
Keywords: IEC standards; encoding; power apparatus; power system measurement; power transmission control; power transmission reliability; protocols; security of data; IEC 60870-5-101 (104); combined encoding; communication channels; control signal transmission; energy supply; information protection; information signal veracity; monitoring signal transmission; power equipment energy transmission state reliability; telemechanics systems; Communication channels; Distortion; Distortion measurement; Encoding; IEC Standards; Information processing; Probability; biimpulse conditionally correlational code; communication channel; information veracity; protocol IEC 608705-101 (104); reliability; telemechanics system (ID#: 16-10259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7418232&isnumber=7416920

 

I. Kiss, B. Genge, P. Haller and G. Sebestyén, “A Framework for Testing Stealthy Attacks in Energy Grids,” Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on, Cluj-Napoca, 2015, pp. 553-560. doi: 10.1109/ICCP.2015.7312718
Abstract: The progressive integration of traditional Information and Communication Technologies (ICT) hardware and software into the supervisory control of modern Power Grids (PG) has given birth to a unique technological ecosystem. Modern ICT handles a wide variety of advantageous services in PG, but in turn exposes PG to significant cyber threats. To ensure security, PG use various anomaly detection modules to detect the malicious effects of cyber attacks. In many reported cases, newly emerged targeted cyber-physical attacks can remain stealthy even in the presence of anomaly detection systems. In this paper we present a framework for elaborating stealthy attacks against the critical infrastructure of power grids. Using the proposed framework, experts can verify the effectiveness of the applied anomaly detection systems (ADS) in either real or simulated environments. The novelty of the technique lies in the fact that the developed “smart” power grid cyber attack (SPGCA) first reveals the devices that can be compromised while causing only a limited effect observable by ADS and PG operators. Compromising low-impact devices first drives the PG into a more sensitive, near-unstable state, which leads to high damage when the attacker finally compromises high-impact devices, e.g., breaking high-demand power lines to cause a blackout. The presented technique should be used to strengthen the deployment of ADS and to define various security zones to defend PG against such intelligent cyber attacks. Experimental results based on the IEEE 14-bus electricity grid model demonstrate the effectiveness of the framework.
Keywords: computer network security; power engineering computing; power system control; power system reliability; power system simulation; smart power grids; ADS; ICT hardware; IEEE 14-bus electricity grid model; PG operators; SPGCA; anomaly detection modules; anomaly detection systems; cyber threats; cyber-physical attacks; energy grids; information and communication technologies; intelligent cyber attacks; power grids; power lines; smart power grid cyber attack; stealthy attacks; supervisory control; Actuators; Phasor measurement units; Power grids; Process control; Sensors; Voltage measurement; Yttrium; Anomaly Detection; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Power Grid (ID#: 16-10260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312718&isnumber=7312586
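
The target-selection loop at the heart of the framework can be sketched as ordering candidate devices by an ADS-visibility score and compromising the stealthiest first; the device names and scores below are placeholders, not results from the IEEE 14-bus experiments:

# Each device carries (ADS-observed impact, physical grid impact).
devices = {
    "feeder-7":  (0.05, 0.2),
    "relay-3":   (0.10, 0.3),
    "hv-line-2": (0.80, 0.9),   # high impact, but easily detected
}
# Stealthiest first: low ADS visibility early, escalation at the end.
plan = sorted(devices, key=lambda d: devices[d][0])
for step, dev in enumerate(plan, 1):
    print(f"step {step}: compromise {dev}")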

 

X. Lu, S. Wang, W. Li, P. Jiang and C. Zhang, “Development of a WSN Based Real Time Energy Monitoring Platform for Industrial Applications,” Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE 19th International Conference on, Calabria, 2015, pp. 337-342. doi: 10.1109/CSCWD.2015.7230982
Abstract: In recent years, significantly increasing pressure from both energy prices and the scarcity of energy resources has dramatically raised sustainability awareness in the industrial sector, where effective energy-efficient process planning and scheduling are urgently demanded. In response to this trend, the development of a low-cost, high-accuracy, flexible, and distributed real-time energy monitoring platform is imperative. This paper presents the design, implementation, and testing of a remote energy monitoring system to support energy-efficient sustainable manufacturing in an industrial workshop, based on a hierarchical network architecture that integrates WSNs and Internet communication into a knowledge and information services platform. To verify the feasibility and effectiveness of the proposed system, it was implemented on a real shop floor and evaluated with various production processes. The assessment results show that the proposed system is of practical significance in discovering energy relationships between manufacturing processes, which can be used to support machining scheme selection, energy saving discovery, and energy quota allocation on a shop floor.
Keywords: Internet; energy conservation; information services; machining; manufacturing processes; power engineering computing; power system measurement; pricing; sustainable development; wireless sensor networks; Internet communication; WSN based real time energy monitoring platform; energy efficient process planning; energy efficient process scheduling; energy price; energy quota allocation; energy resource scarcity; energy saving discovery; industrial applications; information services platform; machining scheme selection; manufacturing process; sustainability awareness; wireless sensor network; Communication system security; Electric variables measurement; Manufacturing; Monitoring; Planning; Wireless communication; Wireless sensor networks; Cloud service; Wireless sensor network; energy monitoring; sustainable manufacturing (ID#: 16-10261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230982&isnumber=7230917

 

P. H. Yang and S. M. Yen, “Memory Attestation of Wireless Sensor Nodes by Trusted Local Agents,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 82-89. doi: 10.1109/Trustcom.2015.360
Abstract: Wireless Sensor Networks (WSNs) have been deployed for a wide variety of commercial, scientific, or military applications for the purposes of surveillance and critical data collection. Malicious code injection is a serious threat to sensor nodes, enabling fake data delivery or private data disclosure. The technique of memory attestation, used to verify the integrity of a device's firmware, is a potential solution to this threat, and low-cost software-based schemes are particularly suitable for protecting resource-constrained sensor nodes. Unfortunately, software-based attestation usually requires additional mechanisms to provide reliable protection when the sensor nodes communicate with the verifier over multiple hops. Alternative hardware-based attestation (e.g., TPM) guarantees a reliable integrity measurement but is impractical for WSN applications, primarily due to the high computational overhead and hardware cost. This paper proposes a lightweight hardware-based memory attestation scheme employing a simple tamper-resistant trusted local agent that is free from any cryptographic computation and is particularly suitable for sensor nodes. The experimental results show the effectiveness of the proposed scheme.
Keywords: cryptography; firmware; telecommunication network reliability; telecommunication security; wireless sensor networks; WSN; computational overhead; cryptographic computation; device firmware; fake data delivery; hardware cost; hardware-based attestation; lightweight hardware-based memory attestation; low cost software-based schemes; malicious code injection; private data disclosure; reliable integrity measurement; reliable protection; resource-constraint sensor nodes; simple tamper-resistant trusted local agent; software-based attestation; trusted local agents; wireless sensor nodes; Base stations; Clocks; Hardware; Protocols; Security; Wireless sensor networks; Attestation; malicious code; trusted platform (ID#: 16-10262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345268&isnumber=7345233
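
The core integrity comparison an attestation agent performs is sketched below; the trusted local agent hardware and the challenge protocol of the paper are not modeled, only the hash-and-compare step:

import hashlib

def attest(firmware_image: bytes, expected_digest: str) -> bool:
    """Compare the firmware hash against the digest enrolled at deployment."""
    return hashlib.sha256(firmware_image).hexdigest() == expected_digest

clean = b"\x90" * 1024                        # pristine firmware (simulated)
expected = hashlib.sha256(clean).hexdigest()  # recorded at deployment time
infected = clean[:512] + b"\xEB\xFE" + clean[514:]  # injected code stub

print("clean passes:   ", attest(clean, expected))
print("infected passes:", attest(infected, expected))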
 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.