Bibliography
Software metrics are widely used in industry to measure the quality of software and to give an early indication of the efficiency of the development process. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report the results and lessons learned while collecting these metrics. Furthermore, we recommend quality limits for each metric and validate them on models developed in a number of industrial projects.
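As a purely hypothetical illustration of this kind of metric collection (the article's ASD-specific metrics and recommended limits are not reproduced here; the input format and the threshold below are invented for the example), a few size metrics for a state machine might be computed as follows:

# Hypothetical sketch: simple size metrics for a state machine model.
# The representation and the quality limit are assumptions, not the
# ASD metrics or thresholds discussed in the article.

# A toy state machine: state -> list of (event, target_state).
machine = {
    "Idle":    [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused":  [("resume", "Running"), ("stop", "Idle")],
}

num_states = len(machine)
num_transitions = sum(len(ts) for ts in machine.values())
# Average outgoing transitions per state, a crude complexity indicator.
fan_out = num_transitions / num_states

print(f"states={num_states} transitions={num_transitions} fan_out={fan_out:.2f}")
if fan_out > 5:  # assumed quality limit, for illustration only
    print("warning: high average fan-out; consider decomposing the machine")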
This paper concerns the role of human error in early risk assessment in software project management. Researchers have recently begun to focus on human error in early risk assessment for large software projects; statistics show it to be a major component of software problems, with over 80% of economic losses attributed to it. There has been comparatively little empirical research on the role of human error in this context, particularly at the organizational level, largely because of reluctance to share information and statistics on security issues in online software applications. Grounded theory was employed as the research methodology to investigate the root causes of human error in online security risks. An open-ended question was asked of 103 information security experts around the globe, and the responses were used to develop a list of human-error causes through open coding. The paper contributes to our understanding of the causes of human error in information security contexts. It is also one of the first information security studies of its kind to combine the Strauss and Glaser grounded theory approaches during the data collection phase to achieve the required number of participant responses, and is a significant contribution to the field.
Since security and fault tolerance are two important metrics of data storage, distributed data storage and transaction face both opportunities and challenges. Traditional transaction systems for storage resources generally run in a centralized mode, which results in high cost, vendor lock-in, single-point-of-failure risk, DDoS attacks, and information security concerns. Therefore, this paper proposes a distributed transaction method for cloud storage based on smart contracts. First, to guarantee fault tolerance and decrease the storage cost of erasure coding, a VCG-based auction mechanism is proposed for storage transactions, and we deploy and implement the proposed mechanism by designing a corresponding smart contract. In particular, we address the problem of how to implement a VCG-like mechanism in a blockchain environment. We simulate the proposed storage transaction method on a private Ethereum chain. The results show that the proposed transaction model realizes competitive trading of storage resources and ensures the safe and economical operation of resource trading.
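For intuition, here is a minimal plain-Python sketch of the VCG idea underlying such an auction (this is not the paper's smart contract; the reverse-auction setting with unit-supply sellers is an assumption made for illustration):

# Minimal sketch of a VCG-style reverse auction for storage (plain Python,
# not the paper's Solidity contract). Sellers ask a price to store one
# block; the buyer needs k blocks. With unit-supply sellers, the VCG
# payment to each winner is the (k+1)-th lowest ask, i.e. the externality
# the winner imposes on the others.

def vcg_storage_auction(asks, k):
    """asks: dict seller -> asking price; k: number of blocks needed."""
    ranked = sorted(asks.items(), key=lambda kv: kv[1])
    winners = [seller for seller, _ in ranked[:k]]
    payment = ranked[k][1] if len(ranked) > k else None  # (k+1)-th lowest ask
    return winners, payment

winners, price = vcg_storage_auction(
    {"nodeA": 3, "nodeB": 5, "nodeC": 4, "nodeD": 9}, k=2)
print(winners, price)  # ['nodeA', 'nodeC'] 5 -- each winner is paid 5

Paying the (k+1)-th lowest ask makes truthful bidding a dominant strategy, which is the property a VCG-like storage market wants to preserve on-chain.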
We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complement our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.
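Restated compactly in notation, with no content beyond the abstract's own bounds: any protocol with success probability 1-o(1) over the noisy n-party star must communicate

\[ \mathrm{CC}_{\text{noisy}}(T) \;=\; \Omega\!\left( T \cdot \frac{\log n}{\log\log n} \right), \]

and, together with the matching coding scheme, this pins down the capacity of the synchronous star network as

\[ \Theta\!\left( \frac{\log\log n}{\log n} \right). \]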
Program analysis on binary code is considered difficult because one has to resolve the destinations of indirect jumps. However, there is another difficulty, context dependency, which matters when one processes binary programs that are not compiler-generated. In this paper, we propose a novel approach to tackling these difficulties and describe a way to reconstruct the control flow of a binary program with no assumptions beyond the operational meaning of machine instructions.
Ensuring software security is essential for developing reliable software. Software can suffer from security problems due to weaknesses in code constructs introduced during development. Our goal is to relate software security to different code constructs so that developers can become aware, very early, of coding weaknesses that might be related to vulnerabilities. In this study, we chose Java nano-patterns as the code constructs: method-level patterns defined on the attributes of Java methods. The study aims to find the correlation between software vulnerability and these method-level structural code constructs. For our first case study, we identified the vulnerable methods in 39 versions of three major releases of Apache Tomcat. We extracted nano-patterns from the affected methods of these releases. We also extracted nano-patterns from the non-vulnerable methods of Apache Tomcat, selecting the last version of each of the three major releases (6.0.45 for release 6, 7.0.69 for release 7, and 8.0.33 for release 8) as the non-vulnerable versions. We then compared the nano-pattern distributions in vulnerable versus non-vulnerable methods. In our second case study, we extracted nano-patterns from the affected methods of three vulnerable J2EE web applications: Blueblog 1.0, Personalblog 1.2.6, and Roller 0.9.9, all of which were deliberately made vulnerable for testing purposes. We found that some nano-patterns, such as objCreator, staticFieldReader, typeManipulator, looper, exceptions, localWriter, and arrReader, are more prevalent in affected methods, whereas others, such as straightLine, are more prevalent in non-affected methods. We conclude that nano-patterns can be used as indicators of the vulnerability-proneness of code.
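A minimal hypothetical sketch of the distribution comparison described above (the data layout and example patterns are stand-ins; the study's extraction tooling is not reproduced):

# Hypothetical sketch: comparing nano-pattern prevalence in vulnerable
# vs. non-vulnerable methods. The input layout is assumed; extraction of
# nano-patterns from bytecode is a separate step not shown here.
from collections import Counter

def prevalence(methods):
    """methods: list of sets of nano-pattern names present in each method.
    Returns the fraction of methods exhibiting each pattern."""
    counts = Counter(p for m in methods for p in m)
    return {p: counts[p] / len(methods) for p in counts}

vulnerable = [{"objCreator", "looper"}, {"objCreator", "exceptions"}]
clean = [{"straightLine"}, {"straightLine", "localWriter"}]

vuln_prev, clean_prev = prevalence(vulnerable), prevalence(clean)
for pattern in sorted(set(vuln_prev) | set(clean_prev)):
    print(pattern, vuln_prev.get(pattern, 0.0), clean_prev.get(pattern, 0.0))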
For streaming applications, we consider parallel burst erasure channels in the presence of an eavesdropper. The legitimate receiver must perfectly recover each source symbol subject to a decoding delay constraint, without the eavesdropper gaining any information from his observation. For a certain class of code parameters, we propose delay-optimal M-link codes that recover multiple erasure bursts of limited length and provide perfect security even if the eavesdropper can observe a link of his choice. Our codes achieve the maximum secrecy rate for the channel model.
In recent years, Edge Computing (EC) has attracted increasing attention for its advantages in handling latency-sensitive and compute-intensive applications. It is becoming a widespread solution to the last-mile problem of cloud computing. However, in actual EC deployments, data confidentiality becomes an issue that cannot be ignored, because edge devices may be untrusted. In this paper, a secure and efficient edge computing scheme based on linear coding is proposed. Generally, linear coding can achieve data confidentiality by encoding random blocks together with the original data blocks before they are distributed to unreliable edge nodes. However, adding a large number of irrelevant random blocks also brings great communication overhead and high decoding complexity. In this paper, we focus on the design of secure coded edge computing using orthogonal vectors to protect the information-theoretic security of the data matrix stored on edge nodes and the input matrix uploaded by the user device, while further reducing the communication overhead and decoding complexity.
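A generic sketch of the random-block padding idea that such schemes build on (numpy, real arithmetic for brevity; the paper's orthogonal-vector construction and its information-theoretic guarantees are not reproduced here, and true information-theoretic security would require finite-field arithmetic):

# Generic sketch of confidentiality via linear coding: each edge node
# stores a random linear mixture of data blocks and random blocks, so a
# single share reveals only a mixture. Illustrates the padding idea only.
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(2, 8))    # 2 data blocks
pad = rng.integers(0, 256, size=(2, 8))     # 2 random (masking) blocks
blocks = np.vstack([data, pad]).astype(float)

G = rng.standard_normal((4, 4))             # encoding matrix, invertible w.h.p.
shares = G @ blocks                         # one row stored per edge node

# A user holding G can decode; a node holding one share sees only a mixture.
recovered = np.linalg.solve(G, shares)[:2]
assert np.allclose(recovered, data)

The cost the abstract points at is visible even here: the random rows of `pad` inflate both what must be shipped to the nodes and the size of the system to invert, which is the overhead the orthogonal-vector design aims to reduce.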
Over the past few years we have articulated a theory of ‘encrypted computing’, in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor showing that near-conventional speeds are attainable via this approach. Benchmark performance with the US AES-128 flagship encryption and a 1 GHz clock is now equivalent to a 433 MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how a system based on the processor protects user data from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation at remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) a processor that runs encrypted; (ii) a slightly modified conventional machine-code instruction set architecture with which security is achievable; and (iii) an ‘obfuscating’ compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic "semantic security" for user data against the operator and system insiders.
To help establish a more scientific basis for security science, which will enable the development of fundamental theories and move the field from being primarily reactive to primarily proactive, it is important for research results to be reported in a scientifically rigorous manner. Such reporting will allow for the standard pillars of science, namely replication, meta-analysis, and theory building. In this paper we aim to establish a baseline of the state of scientific work in security through the analysis of indicators of scientific research as reported in the papers from the 2015 IEEE Symposium on Security and Privacy. To conduct this analysis, we developed a series of rubrics to determine the completeness of the papers relative to the type of evaluation used (e.g. case study, experiment, proof). Our findings showed that while papers are generally easy to read, they often do not explicitly document some key information like the research objectives, the process for choosing the cases to include in the studies, and the threats to validity. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.
With the advent of massive machine-type communications, security protection becomes more important than ever. Efforts have been made to build security protection into physical-layer signal design, so-called physical-layer security (PLS). The purpose of this paper is to evaluate the performance of PLS schemes for multiple-input multiple-output (MIMO) systems with space-time block coding (STBC) under imperfect channel estimation. Three PLS schemes for STBC are modeled, their bit error rate (BER) performance is evaluated under various channel-estimation-error environments, and their performance characteristics are analyzed.
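To make the kind of evaluation concrete, here is a Monte-Carlo BER sketch for a 2x1 Alamouti STBC link with imperfect channel estimates (BPSK and the additive estimation-error model are assumptions made for illustration; the paper's three PLS schemes are not reproduced):

# Monte-Carlo BER sketch: Alamouti 2x1 STBC over Rayleigh fading with an
# imperfect channel estimate h_hat = h + e. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def ber_alamouti(snr_db, est_err_var, n_bits=200_000):
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = (2.0 * bits - 1).astype(complex).reshape(-1, 2)  # BPSK pairs (s1, s2)
    n_pairs = s.shape[0]
    h = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    w = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2 * snr)
    # Two received slots: r1 = h1*s1 + h2*s2, r2 = -h1*conj(s2) + h2*conj(s1)
    r1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + w[:, 0]
    r2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + w[:, 1]
    # Combine with the *estimated* channel instead of the true one.
    e = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) * np.sqrt(est_err_var / 2)
    hh = h + e
    s1_hat = np.conj(hh[:, 0]) * r1 + hh[:, 1] * np.conj(r2)
    s2_hat = np.conj(hh[:, 1]) * r1 - hh[:, 0] * np.conj(r2)
    detected = np.stack([s1_hat.real > 0, s2_hat.real > 0], axis=1).astype(int)
    return np.mean(detected.ravel() != bits)

for err_var in (0.0, 0.05):
    print(f"est_err_var={err_var}: BER={ber_alamouti(10, err_var):.4f}")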
Polar codes polarize the quality of channels into either completely noisy or completely noiseless channels. This paper presents an extrinsic information transfer (EXIT) analysis for iterative decoding of Polar codes to reveal the mechanism of this channel transformation. The purpose of understanding the transformation process is to comprehend the placement of information bits and frozen bits and the security standard of Polar codes. Mutual information is derived based on the EXIT-chart concept for the check nodes and variable nodes of low-density parity-check (LDPC) codes and applied to Polar codes. The paper explores the quality of the polarized channels at finite block length; the finite block length is of interest because in the fifth generation of telecommunications (5G) the block length is limited. The paper reveals how the EXIT curves of Polar codes change and explores the polarization characteristics: high mutual-information values are needed for the frozen bits to be detectable, and otherwise the error-correction capability of Polar codes decreases drastically. These results are expected to serve as a reference for the development of Polar codes for 5G technologies and beyond.
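For reference, the standard Gaussian-approximation EXIT expressions for LDPC variable and check nodes that such an analysis starts from (the paper's Polar-specific adaptation is not reproduced here):

\[ I_{E,V} = J\!\left(\sqrt{(d_v - 1)\,\big[J^{-1}(I_{A,V})\big]^2 + \big[J^{-1}(I_{ch})\big]^2}\right), \]
\[ I_{E,C} \approx 1 - J\!\left(\sqrt{d_c - 1}\; J^{-1}\!\big(1 - I_{A,C}\big)\right), \]

where J(σ) maps the standard deviation of a consistent Gaussian LLR to its mutual information, d_v and d_c are the variable- and check-node degrees, and I_ch is the mutual information delivered by the channel.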
In modern societies, critical services such as transportation, power supply, and water treatment and distribution are strongly dependent on Industrial Control Systems (ICS). As technology advances, new features improve the services provided by such ICS. On the other hand, this progress also introduces new risks of cyber attacks due to the multiple direct and indirect dependencies between the cyber and physical components of such systems. Performing rigorous security tests and risk analysis on these critical systems is thus a challenging task, because of the non-trivial interactions between digital and physical assets and the domain-specific knowledge necessary to analyse a particular system. In this work, we propose a methodology to model and analyse a System Under Test (SUT) as a data flow graph that highlights interactions among internal entities throughout the SUT. This model is automatically extracted from production code available in Programmable Logic Controllers (PLCs). We also propose a reachability algorithm and an attack diagram that emphasize the dependencies between the cyber and physical domains, enabling a human analyst to gauge the various attack vectors that arise from subtle dependencies in data and information propagation. We test our methodology on a functional water treatment testbed and demonstrate how an analyst could use our attack diagrams to reason about possible threats to various targets of the SUT.
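A minimal sketch of the reachability step (the node names and graph below are hypothetical; the paper's contribution, extracting the real data flow graph from PLC production code and building attack diagrams, is not reproduced):

# Minimal reachability sketch over a data-flow graph. Node names are
# hypothetical; in the paper the graph is extracted from PLC code.
from collections import deque

def reachable(graph, source):
    """graph: dict node -> iterable of successor nodes. Returns the set
    of nodes an attacker controlling `source` can influence."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

flows = {
    "hmi_setpoint": ["plc1_level_ctrl"],
    "plc1_level_ctrl": ["pump_p101", "plc2_dosing"],
    "plc2_dosing": ["chem_valve"],
}
# Compromising the HMI setpoint can influence the pump and dosing valve.
print(reachable(flows, "hmi_setpoint"))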
With an increasing number of wireless devices, the risk of eavesdropping increases as well. From information theory, it is well known that wiretap codes can asymptotically achieve a vanishing decoding error probability at the legitimate receiver while also achieving vanishing leakage to eavesdroppers. However, under finite blocklength there exists a tradeoff among the different parameters of the transmission. In this work, we propose a flexible wiretap code design for Gaussian wiretap channels under finite blocklength, using neural-network autoencoders. We show that the proposed scheme has higher flexibility in the error rate and leakage tradeoff than traditional codes.
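A conceptual sketch of the autoencoder pattern (PyTorch; the architecture, noise levels, and adversarial leakage penalty below are generic assumptions, not the paper's design):

# Conceptual wiretap-autoencoder sketch: an encoder maps k message bits
# to n channel uses, Bob's decoder minimizes error, and leakage to Eve
# (a noisier channel) is penalized via an adversarially trained decoder.
import torch
import torch.nn as nn

k, n = 4, 16                      # message bits per block, channel uses
enc = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
bob = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
eve = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.Adam(list(enc.parameters()) + list(bob.parameters()), 1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), 1e-3)
bce = nn.BCEWithLogitsLoss()
sigma_b, sigma_e, lam = 0.5, 1.0, 0.1   # Eve's channel is noisier

for step in range(2000):
    m = torch.randint(0, 2, (256, k)).float()
    x = enc(m)
    x = x / x.norm(dim=1, keepdim=True) * n ** 0.5   # power constraint
    y_b = x + sigma_b * torch.randn_like(x)          # Bob's observation
    y_e = x + sigma_e * torch.randn_like(x)          # Eve's observation
    # Adversary step: train Eve to decode from her observation.
    loss_e = bce(eve(y_e.detach()), m)
    opt_eve.zero_grad(); loss_e.backward(); opt_eve.step()
    # Encoder/Bob step: low error for Bob, high error for Eve (leakage penalty).
    loss = bce(bob(y_b), m) - lam * bce(eve(y_e), m)
    opt.zero_grad(); loss.backward(); opt.step()

Retraining with different lam trades Bob's reliability against leakage to Eve, which is the kind of flexibility the abstract claims over fixed algebraic constructions.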
A secure multi-party batch matrix multiplication (SMBMM) problem is considered, where the goal is to allow a master to efficiently compute the pairwise products of two batches of massive matrices by distributing the computation across S servers. Any X colluding servers gain no information about the input, and the master gains no additional information about the input beyond the product. A solution called Generalized Cross-Subspace Alignment codes with Noise Alignment (GCSA-NA) is proposed in this work, based on cross-subspace alignment codes. The state-of-the-art solution to SMBMM is a coding scheme called polynomial sharing (PS), proposed by Nodehi and Maddah-Ali. GCSA-NA outperforms PS codes in several key aspects: more efficient and secure inter-server communication, lower latency, flexible inter-server network topology, efficient batch processing, and tolerance to stragglers.
It is investigated how to achieve semantic security for the wiretap channel. A new type of functions called biregular irreducible (BRI) functions, similar to universal hash functions, is introduced. BRI functions provide a universal method of establishing secrecy. It is proved that the known secrecy rates of any discrete and Gaussian wiretap channel are achievable with semantic security by modular wiretap codes constructed from a BRI function and an error-correcting code. A characterization of BRI functions in terms of edge-disjoint biregular graphs on a common vertex set is derived. This is used to study examples of BRI functions and to construct new ones.
Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, and unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy but can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and we find that in cases of extreme uncertainty it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies that are well tuned to the details of the situation and the relative probabilities of success.
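As a toy illustration of the evaluate-and-search approach (the state, moves, and scoring are invented stand-ins; the paper's game is far richer):

# Toy fixed-depth search over attacker/defender moves using a simple
# evaluation function, in the spirit of the approach described above.
# State, move set, and scoring are illustrative stand-ins only.

def evaluate(state):
    """Simple evaluation: resources held minus a cost for exploits burned."""
    return state["resources_owned"] - 0.5 * state["exploits_used"]

def moves(state, player):
    # Stand-in move generator; a real model would enumerate exploits,
    # patches, probes, etc., each with detection probabilities.
    return ["probe", "exploit", "pass"] if player == "attacker" else ["patch", "pass"]

def apply(state, player, move):
    s = dict(state)
    if player == "attacker" and move == "exploit":
        s["resources_owned"] += 1
        s["exploits_used"] += 1
    if player == "defender" and move == "patch":
        s["resources_owned"] = max(0, s["resources_owned"] - 1)
    return s

def search(state, player, depth):
    """Minimax to a fixed depth: attacker maximizes, defender minimizes."""
    if depth == 0:
        return evaluate(state), None
    best, arg = (float("-inf"), None) if player == "attacker" else (float("inf"), None)
    for mv in moves(state, player):
        val, _ = search(apply(state, player, mv),
                        "defender" if player == "attacker" else "attacker",
                        depth - 1)
        if (player == "attacker" and val > best) or (player == "defender" and val < best):
            best, arg = val, mv
    return best, arg

print(search({"resources_owned": 0, "exploits_used": 0}, "attacker", depth=4))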