Biblio

Filters: Keyword is uncertain environment
2020-03-16
Lin, Kuo-Sui.  2019.  A New Evaluation Model for Information Security Risk Management of SCADA Systems. 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS). :757–762.
Supervisory control and data acquisition (SCADA) systems are becoming increasingly susceptible to cyber-physical attacks on both the physical and cyber layers of critical information infrastructure. Failure Mode and Effects Analysis (FMEA) has been widely used as a structured method to prioritize all possible vulnerable areas (failure modes) in security design reviews of information systems. However, traditional RPN-based FMEA has some inherent problems, and FMEA has seen little application to SCADA security under vague and uncertain environments. The main purpose of this study was therefore to propose a new evaluation model that not only remedies the above-mentioned problems but also evaluates, prioritizes, and corrects the security risk of a SCADA system's threat modes. A numerical case study demonstrates that the proposed model addresses FMEA's inherent problems and is well suited to a semi-quantitative, high-level analysis of a secure SCADA system's failure modes in the early design phases.
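As background for the "inherent problems" of traditional RPN-based FMEA that the abstract mentions, here is a minimal sketch (not the paper's proposed model) of the conventional Risk Priority Number, RPN = Severity × Occurrence × Detection, each rated 1–10. The threat modes and ratings below are purely illustrative assumptions; the sketch shows one well-known weakness: distinct (S, O, D) rating combinations can collapse to the same RPN, hiding which threat mode is actually more critical.

```python
# Hedged sketch of traditional RPN-based FMEA prioritization (NOT the
# paper's new evaluation model). Ratings and threat modes are invented
# for illustration only.

def rpn(severity, occurrence, detection):
    """Risk Priority Number for one failure (threat) mode, S x O x D."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical SCADA threat modes with (S, O, D) ratings.
threat_modes = {
    "spoofed sensor telemetry": (9, 2, 5),
    "HMI credential theft":     (5, 6, 3),
    "PLC firmware tampering":   (10, 3, 3),
}

# All three distinct rating profiles yield RPN = 90, so a plain
# RPN ranking cannot distinguish them -- one of the classic criticisms
# that motivates alternative (e.g. fuzzy) FMEA evaluation models.
for name, ratings in threat_modes.items():
    print(f"{name}: RPN={rpn(*ratings)}")
```

Note how the multiplication discards the individual weight of each factor; a severity-10 tampering threat ranks no higher than a moderate credential-theft threat.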
2018-07-06
Sun, R., Yuan, X., Lee, A., Bishop, M., Porter, D. E., Li, X., Gregio, A., Oliveira, D..  2017.  The dose makes the poison — Leveraging uncertainty for effective malware detection. 2017 IEEE Conference on Dependable and Secure Computing. :123–130.

Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences, such as exfiltration of sensitive information. A promising avenue for improving the effectiveness of behavioral malware detectors is to combine fast (but usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy but time-consuming deep learning (DL) models. The main idea is to place software that receives a borderline classification from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for OS processes: standard, for software identified as benign by traditional ML detectors, and uncertain, for software that received a borderline classification from those detectors. The uncertain environment impedes software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a 10% perturbation threshold, intrusive and non-intrusive strategies caused approximately 65% of the malware to fail to accomplish its tasks, while approximately 30% of the analyzed benign software met with various levels of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
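The rate-limiting idea behind Chameleon's "uncertain" environment can be sketched as follows. This is a hedged user-space model of the decision logic only: the real framework interposes on Linux system calls inside the OS, and the syscall set, threshold value, and perturbation names below are illustrative assumptions, not Chameleon's actual implementation.

```python
# Hedged sketch: at a perturbation threshold p, each selected system call
# from a borderline-classified process is perturbed with probability p;
# all other calls pass through untouched. Everything here is a toy model
# of the decision logic, not Chameleon's kernel-level mechanism.
import random

PERTURBED_SYSCALLS = {"read", "write", "connect", "send"}  # illustrative set

def maybe_perturb(syscall_name, threshold=0.10, rng=random):
    """Return the action taken for one intercepted system call."""
    if syscall_name not in PERTURBED_SYSCALLS:
        return "pass-through"
    if rng.random() >= threshold:
        return "pass-through"
    # Hypothetical perturbation strategies: delay the call, fail it,
    # or shorten the buffer it operates on.
    return rng.choice(["delay", "error-return", "truncate-buffer"])

# Simulate 10,000 write() calls at the 10% threshold.
counts = {"pass-through": 0, "perturbed": 0}
rng = random.Random(42)  # fixed seed for reproducibility
for _ in range(10_000):
    action = maybe_perturb("write", threshold=0.10, rng=rng)
    counts["pass-through" if action == "pass-through" else "perturbed"] += 1
print(counts)  # roughly 10% of the selected calls are perturbed
```

Because benign software rarely tolerates failed I/O gracefully either, the same mechanism explains the paper's observation that some benign programs are also disrupted, and that I/O-bound workloads suffer more than CPU-bound ones.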