During software evolution, it is important to evolve not only the source code but also its architecture, to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that may be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. A model-to-model transformation then automatically creates a constraint model, from which we generate, by means of the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.
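As a minimal, hypothetical sketch of what such alternatives look like (this is not the paper's DSL or its Alloy model), the following Python snippet enumerates every execution order of a set of implementation tasks that respects "must happen before" dependencies; the task names are invented:

```python
def all_orderings(tasks, before):
    """Enumerate every task ordering consistent with the 'before' constraints.

    tasks: set of task names still to be scheduled.
    before: dict mapping a task to the set of tasks that must precede it.
    """
    if not tasks:
        return [[]]
    results = []
    for t in sorted(tasks):
        # t is schedulable only if no unscheduled task must precede it.
        if not (before.get(t, set()) & tasks):
            for rest in all_orderings(tasks - {t}, before):
                results.append([t] + rest)
    return results

# Hypothetical tasks: the service split must precede both deployments.
deps = {"deploy-eu": {"split-service"}, "deploy-us": {"split-service"}}
print(all_orderings({"split-service", "deploy-eu", "deploy-us"}, deps))
# -> two alternative orderings, both starting with "split-service"
```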
Presented at NSA Science of Security Quarterly Meeting, July 2014.
Presented at the Illinois SoS Bi-weekly Meeting, December 2014.
As information security has become an increasing concern for software developers and users, requirements engineering (RE) researchers have brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements, with the specialized goal of making security more usable for stakeholders ranging from product owners to system users and administrators. In this paper we report results from a literature survey comparing security requirements research from RE conferences with that of the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other's insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE.
The process of software development and evolution has proven difficult to improve. For example, well-documented security issues such as SQL injection (SQLi) still top most vulnerability lists after more than a decade. Quantitative security process and quality metrics are often neglected for lack of time and resources. Security problems are hard to quantify and even harder to predict or relate to any process improvement activity. The goal of this thesis is to assess the usefulness of "classical" software reliability engineering (SRE) models in the context of open source software security, the conditions under which they may be useful, and the information that they can provide about the security quality of a software product. We start with security problem reports for the open source Fedora series of software releases. We illustrate how one can learn from the normal operational profile about the non-operational processes related to security problems. One aspect is the classification of security problems based on the human traits that contribute to the injection of problems into code, whether due to poor practices or limited knowledge (epistemic errors) or due to random accidental events (aleatoric errors). Knowing the distribution aids in the development of an attack profile. In the case of Fedora, the distribution of security problems found post-release was consistent across four different releases of the software. The security problem discovery rate appears to be roughly constant, but much lower than the initial non-security problem discovery rate. Previous work has shown that non-operational testing can help accelerate and focus the problem discovery rate and that it can be successfully modeled. We find that some classical reliability models can be used with success to estimate the residual number of security problems, and thereby provide a measure of the security characteristics of the software. We propose an agile software testing process that combines operational and non-operational (or attack-related) testing with the intent of finding more security problems faster.
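As an illustration of the kind of classical SRE model evaluated here, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative problem counts and derives a residual estimate; the monthly counts are invented for illustration, not Fedora data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto NHPP mean value function: expected cumulative problems by time t,
# where a is the total expected problem count and b the detection rate.
def m(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative security-problem counts per month (illustrative only).
t = np.arange(1, 13)
counts = np.array([3, 6, 8, 11, 12, 14, 15, 17, 17, 18, 19, 19])

(a, b), _ = curve_fit(m, t, counts, p0=(25.0, 0.1))
residual = a - m(t[-1], a, b)  # estimated problems not yet discovered
print(f"estimated total a = {a:.1f}, residual after month {t[-1]}: {residual:.1f}")
```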
Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.
In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.
Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
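A rough, hypothetical sketch of one such measurement (not the paper's six-month corpus methodology): fetch a domain's currently served certificate and test whether its validity period began after the Heartbleed disclosure, a necessary condition for reissue. It assumes the third-party `cryptography` package is installed:

```python
import ssl
from datetime import datetime
from cryptography import x509

# Heartbleed was publicly disclosed on 2014-04-07. Naive UTC datetime to match
# cryptography's not_valid_before (newer versions offer not_valid_before_utc).
HEARTBLEED_DISCLOSED = datetime(2014, 4, 7)

def possibly_reissued(host: str, port: int = 443) -> bool:
    """Heuristic: does the served certificate's validity start post-disclosure?"""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    return cert.not_valid_before > HEARTBLEED_DISCLOSED

print(possibly_reissued("example.com"))
```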
The objective of this paper is to explore the current notions of systems and "system of systems" and to establish the case for quantitative characterization of their structural, behavioural and contextual facets, paving the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives, and partly by the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The proposed systematic quantitative characterization framework endeavours to extend the notion of emergence, allowing the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristics and information content of the ontologies relevant to systems and systems of systems are specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored with a view to formalising a standard notation that depicts relative scale and order and avoids seemingly arbitrary attributions.
This paper proposes a new conceptual modeling language for white-box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application, or even its entire binary code. It then becomes relatively easy for attackers to inspect, reverse engineer, and tamper with the application using the information they obtain. The basis of this paper is the 14 patterns developed by a leading provider of software protection technologies and solutions. We present part of a new modeling language, named i-WBS (White-Box Security), to better describe WB security problems. The essence of the WB security problem is code security, so the new modeling language focuses on code more than ever before. In this way, developers who are not security experts can easily understand what they really need to protect.
In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓ·s^3.64), where s is the size of the Boolean formula to be obfuscated and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓ·s, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x), where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
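In display form, the improvement in required multilinearity levels reads as follows (notation as above; this merely restates the bounds in the abstract):

```latex
\text{previous: } O\!\left(\ell \cdot s^{3.64}\right)
\quad\longrightarrow\quad
\text{this work: } O(\ell \cdot s)
\quad\longrightarrow\quad
\text{keyed } f_z(x) = \phi(z, x):\; O(s)
```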
Specifics of an alias-free digitizer application for compressed digitizing and recording of wideband signals are considered. Signal sampling in this case is performed on the basis of picosecond-resolution event timing; the digitizer is actually a subsystem of the Event Timer A033-ET, and the specific events that are detected and then timed are the crossings of the signal and a reference sine wave. The approach used to develop this subsystem is described, and some results of experimental studies are given.
Rogue software, such as Fake A/V and ransomware, tricks users into paying without providing anything in return. We show that, using a perceptual hash function and hierarchical clustering, more than 213,671 screenshots of executed malware samples can be grouped into subsets of structurally similar images, reflecting image clusters of one malware family or campaign. Based on the clustering results, we show that ransomware campaigns favor prepaid payment methods such as ukash, paysafecard and moneypak, while Fake A/V campaigns use credit cards for payment. Furthermore, especially given the low A/V detection rates of current rogue software – sometimes even as low as 11% – our screenshot analysis approach could serve as a complementary last line of defense.
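A minimal sketch of this pipeline (not the authors' exact configuration): perceptual hashes via the `imagehash` package, then agglomerative clustering on pairwise Hamming distances with scipy. Directory name and threshold are illustrative:

```python
from pathlib import Path

import imagehash
import numpy as np
from PIL import Image
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

paths = sorted(Path("screenshots").glob("*.png"))  # hypothetical directory
hashes = [imagehash.average_hash(Image.open(p)) for p in paths]

# Each 64-bit hash becomes a 0/1 vector; Hamming distance tracks visual similarity.
vectors = np.array([h.hash.flatten().astype(int) for h in hashes])
tree = linkage(pdist(vectors, metric="hamming"), method="average")
labels = fcluster(tree, t=0.25, criterion="distance")  # threshold is illustrative

for p, label in zip(paths, labels):
    print(label, p.name)  # screenshots sharing a label form one image cluster
```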
In the smart grid, a user's private information and home energy usage can be inferred by analyzing the user's electrical load data, and this is an area of concern. A rechargeable battery may be used in the home network to protect the user's privacy. In this paper, the battery can either charge or discharge, and the battery power is adjustable. We model the user's real electrical load, the battery power, and the electrical power recorded by the smart meter, all of which are processed in a discrete way. We then put forward a heuristic algorithm that achieves a lower rate of information leakage than existing solutions. We use statistical methods to protect the user's privacy; theoretical analysis and examples show that our solution makes the scenario design more reasonable and is more effective than existing solutions at preventing the leakage of private information.
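As a minimal sketch of the general battery-based load-hiding idea (the classic best-effort flattening policy, not the paper's heuristic), with discrete time steps and an adjustable battery power limit; all names and numbers are illustrative:

```python
def meter_readings(load, target, capacity, max_power):
    """Report `target` whenever the battery can absorb or supply the difference.

    All quantities are in discrete energy units per time step.
    """
    soc = capacity / 2           # state of charge, start half full
    readings = []
    for demand in load:
        delta = target - demand  # >0: charge the battery, <0: discharge it
        delta = max(-max_power, min(max_power, delta))  # battery power limit
        delta = max(-soc, min(capacity - soc, delta))   # battery capacity limit
        soc += delta
        readings.append(demand + delta)  # what the smart meter records
    return readings

# Hypothetical household load profile; the meter sees a flat 3 units per slot.
print(meter_readings([1, 4, 2, 5, 3], target=3, capacity=6, max_power=2))
```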
This paper proposes a steganography method using digital images, in which the data to be secured is embedded into a digital image. Studies of the human visual system show that changes in image edges are largely imperceptible to human eyes. We therefore use edge detection in steganography to increase the data hiding capacity by embedding more data in edge pixels: if we can increase the number of edge pixels, we can increase the amount of data that can be hidden in the image. To increase the number of edge pixels, multiple edge detection is employed, carried out with a sophisticated operator such as the Canny operator. To compensate for the decrease in PSNR caused by the increased amount of hidden data, the Minimum Error Replacement (MER) method is used. Thus the main goals of image steganography, i.e. security with the highest embedding capacity and good visual quality, are achieved. To extract the data, we need the original image and the embedding ratio: extraction is done by performing multiple edge detection on the original image and extracting the data according to the embedding ratio.
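A hedged sketch of the edge-based embedding step, with plain LSB substitution standing in for the paper's MER refinement; Canny thresholds and file names are illustrative. It assumes OpenCV and NumPy:

```python
import cv2
import numpy as np

def embed_in_edges(gray, bits):
    """Write one payload bit into the LSB of each Canny edge pixel."""
    edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
    ys, xs = np.nonzero(edges)         # coordinates of edge pixels
    if len(bits) > len(ys):
        raise ValueError("payload exceeds edge capacity")
    stego = gray.copy()
    for bit, y, x in zip(bits, ys, xs):
        stego[y, x] = (stego[y, x] & 0xFE) | bit  # replace least significant bit
    return stego

cover = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)
payload = [1, 0, 1, 1, 0, 0, 1, 0]  # bits of the secret message
cv2.imwrite("stego.png", embed_in_edges(cover, payload))
```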