Biblio
Confidentiality, Integrity, and Availability are the principal pillars of secure software. Considering these security principles during the different software development phases reduces software vulnerabilities. This paper measures the impact of different software quality metrics on Confidentiality, Integrity, and Availability for a given object-oriented PHP application with a list of reported vulnerabilities. The National Vulnerability Database was used to provide the impact scores on confidentiality, integrity, and availability for the vulnerabilities reported against the selected applications. The paper studies these scores and their correlation with 25 code metrics computed for the vulnerable source code. The results correlate 23.7% of the variability in `Integrity' with four metrics: Vocabulary Used in Code, Card and Agresti, Intelligent Content, and Efferent Coupling. The Halstead Length metric alone predicts about 24.2% of the observed variability in `Availability'. The results indicate no significant correlation between `Confidentiality' and the tested code metrics.
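As a rough sketch of the kind of analysis this abstract describes (not the authors' actual pipeline), the explained variability can be estimated from a Pearson correlation between a single code metric and the corresponding NVD impact score. The input file and column names below are hypothetical placeholders.

```python
# Minimal sketch: correlate a Halstead Length metric with NVD Availability
# impact scores and report the explained variability (R^2).
# "vulnerable_classes_metrics.csv", "halstead_length", and
# "availability_impact" are assumed names, not from the paper.
import pandas as pd
from scipy import stats

df = pd.read_csv("vulnerable_classes_metrics.csv")  # hypothetical input table

r, p_value = stats.pearsonr(df["halstead_length"], df["availability_impact"])
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
print(f"Explained variability (R^2) = {r ** 2:.3f}")
```

Squaring the correlation coefficient gives the share of variability explained, which is how a figure such as 24.2% for Availability can be read; the paper's actual dataset and statistical procedure are not reproduced here.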
Software insecurity has been identified as one of the leading causes of security breaches. In this paper, we revisited one strategy for addressing software insecurity: the use of software quality metrics. We utilized a multilayer deep feedforward network to examine whether a combination of metrics can predict the appearance of security-related bugs. We also applied traditional machine learning algorithms such as decision tree, random forest, Naïve Bayes, and support vector machines, and compared their results with those of the Deep Learning technique. The results demonstrate that an effective predictive model for forecasting software insecurity can be built from software metrics using Deep Learning. All generated models achieved an accuracy above sixty percent, with Deep Learning performing best. This finding shows that Deep Learning methods combined with software metrics can produce a better forecasting model, thereby aiding software developers in predicting security bugs.
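A minimal sketch of the comparison this abstract describes, using scikit-learn; the metric matrix and bug labels are random placeholders, and MLPClassifier stands in for the multilayer deep feedforward network, since the abstract does not specify the exact architecture or framework used.

```python
# Minimal sketch (not the authors' exact setup): compare several classifiers,
# including a multilayer feedforward network, on a table of software metrics
# labelled with whether a security-related bug was reported.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # placeholder metric matrix (e.g., 10 code metrics)
y = rng.integers(0, 2, size=200)    # placeholder 0/1 security-bug labels

models = {
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
    "naive bayes": GaussianNB(),
    "svm": make_pipeline(StandardScaler(), SVC()),
    "feedforward net": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000),
    ),
}

# Cross-validated accuracy for each model; with random placeholder data the
# scores are near chance, unlike the >60% reported on the real metrics.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:15s} mean accuracy = {scores.mean():.3f}")
```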
Designing and implementing fault-tolerant distributed algorithms is a challenging task in highly safety-critical industries. Formal methods support the design and development of such complex algorithms, yet they are often perceived as an unjustifiable overhead. This paper presents the experience and insights gained when using TLA+ and PlusCal to model and develop fault-tolerant and safety-critical modules for TAS Control Platform, a platform for railway control applications up to safety integrity level (SIL) 4. We show how formal methods helped us improve the correctness of the algorithms and the efficiency of development, and how part of the gap between model and implementation was closed by translation to C code. Additionally, we describe how we gained trust in the formal model and tools by following a specific design process called property-driven design, which also implicitly addresses software quality metrics such as code coverage.