Biblio

Filters: Keyword is coding theory
A
Osaiweran, A., Marincic, J., Groote, J. F.  2017.  Assessing the Quality of Tabular State Machines through Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :426–433.

Software metrics are widely used in industry to measure the quality of software and to give an early indication of the efficiency of the development process. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report on the results and lessons learned while collecting these metrics. Furthermore, we recommend quality limits for each metric and validate them on models developed in a number of industrial projects.

C
Sharma, Seema, Ram, Babu.  2016.  Causes of Human Errors in Early Risk Assesment in Software Project Management. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :11:1–11:11.

This paper concerns the role of human errors in early risk assessment in software project management. Researchers have recently begun to focus on human errors in early risk assessment in large software projects; statistics show them to be a major component of software problems, with over 80% of economic losses attributed to them. There has been comparatively little experimental research on the role of human errors in this context, particularly at the organizational level, largely because of reluctance to share information and statistics on security issues in online software applications. Grounded theory has been employed as the research methodology to investigate the root causes of human errors in online security risks. An open-ended question was asked of 103 information security experts around the globe, and the responses were used to develop a list of human error causes by open coding. The paper represents a contribution to our understanding of the causes of human errors in information security contexts. It is also one of the first information security studies of its kind to utilize Strauss's and Glaser's grounded theory approaches together during the data collection phases to achieve the required number of participants' responses, and it is a significant contribution to the field.

Hao, Jie, Shum, Kenneth W., Xia, Shu-Tao, Yang, Yi-Xian.  2019.  Classification of Optimal Ternary (r, δ)-Locally Repairable Codes Attaining the Singleton-like Bound. 2019 IEEE International Symposium on Information Theory (ISIT). :2828–2832.
In a linear code, a code symbol with (r, δ)-locality can be repaired by accessing at most r other code symbols in case of at most δ - 1 erasures. A q-ary (n, k, r, δ) locally repairable code (LRC), in which every code symbol has (r, δ)-locality, is said to be optimal if it achieves the Singleton-like bound derived by Prakash et al. In this paper, we study the classification of optimal ternary (n, k, r, δ)-LRCs (δ > 2). First, we propose an upper bound on the minimum distance of optimal q-ary LRCs in terms of the field size. Then, we completely determine all six classes of possible parameters with which optimal ternary (n, k, r, δ)-LRCs exist. Moreover, explicit constructions of all six classes of optimal ternary LRCs are proposed in the paper.
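For reference, the Singleton-like bound of Prakash et al. that defines optimality here is d <= n - k + 1 - (ceil(k/r) - 1)(δ - 1). A minimal Python sketch of the optimality check, with a purely illustrative parameter set:

```python
from math import ceil

def singleton_like_bound(n: int, k: int, r: int, delta: int) -> int:
    # Prakash et al.: d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1)
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

def is_optimal(n: int, k: int, r: int, delta: int, d: int) -> bool:
    # An (n, k, r, delta)-LRC is optimal if its minimum distance meets the bound.
    return d == singleton_like_bound(n, k, r, delta)

# Hypothetical parameters, not taken from the paper's six classes.
print(singleton_like_bound(n=12, k=6, r=3, delta=2))  # -> 6
```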
Banse, Christian, Kunz, Immanuel, Schneider, Angelika, Weiss, Konrad.  2021.  Cloud Property Graph: Connecting Cloud Security Assessments with Static Code Analysis. 2021 IEEE 14th International Conference on Cloud Computing (CLOUD). :13–19.
In this paper, we present the Cloud Property Graph (CloudPG), which bridges the gap between static code analysis and runtime security assessment of cloud services. The CloudPG is able to resolve data flows between cloud applications deployed on different resources, and contextualizes the graph with runtime information, such as encryption settings. To provide a vendor- and technology-independent representation of a cloud service's security posture, the graph is based on an ontology of cloud resources, their functionalities and security features. We show, using an example, that our CloudPG framework can be used by security experts to identify weaknesses in their cloud deployments, spanning multiple vendors or technologies, such as AWS, Azure and Kubernetes. This includes misconfigurations, such as publicly accessible storage or undesired data flows within a cloud service, as restricted by regulations such as GDPR.
Gu, Yonggen, Hou, Dingding, Wu, Xiaohong.  2018.  A Cloud Storage Resource Transaction Mechanism Based on Smart Contract. Proceedings of the 8th International Conference on Communication and Network Security. :134-138.

Since security and fault tolerance are two important metrics for data storage, distributed data storage and transaction bring both opportunities and challenges. The traditional transaction system for storage resources, which generally runs in a centralized mode, suffers from high cost, vendor lock-in, single-point-of-failure risk, DDoS attacks and information security threats. Therefore, this paper proposes a distributed transaction method for cloud storage based on smart contracts. First, to guarantee fault tolerance and decrease the storage cost of erasure coding, a VCG-based auction mechanism is proposed for storage transactions, and we deploy and implement the proposed mechanism by designing a corresponding smart contract. In particular, we address the problem of how to implement a VCG-like mechanism in a blockchain environment. Based on a private Ethereum chain, we run simulations of the proposed storage transaction method. The results show that the proposed transaction model can realize competitive trading of storage resources and ensure the safe and economical operation of resource trading.
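The VCG mechanism itself is standard even though the on-chain contract is the paper's contribution. A minimal sketch of a VCG reverse auction for storage blocks, with hypothetical provider names and prices; each winner is paid the externality it relieves, which makes truthful bidding a dominant strategy:

```python
def vcg_reverse_auction(bids: dict, m: int):
    """Select the m cheapest storage providers and compute VCG payments.
    bids: provider -> asking price for storing one coded block (hypothetical).
    A sketch of the mechanism class; the paper's contract logic is not reproduced."""
    ranked = sorted(bids, key=bids.get)
    winners = ranked[:m]
    total = sum(bids[w] for w in winners)
    payments = {}
    for w in winners:
        others = sorted(v for p, v in bids.items() if p != w)
        without_w = sum(others[:m])            # cheapest allocation if w did not bid
        payments[w] = without_w - (total - bids[w])
    return winners, payments

winners, pay = vcg_reverse_auction({"A": 3, "B": 5, "C": 4, "D": 9}, m=2)
print(winners, pay)   # ['A', 'C'] {'A': 5, 'C': 5}
```

In this unit-supply case the VCG payment reduces to the lowest losing bid, as the example shows.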

Braverman, Mark, Efremenko, Klim, Gelles, Ran, Haeupler, Bernhard.  2016.  Constant-rate Coding for Multiparty Interactive Communication is Impossible. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :999–1010.

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.

Izumida, Tomonori, Mori, Akira, Hashimoto, Masatomo.  2018.  Context-Sensitive Flow Graph and Projective Single Assignment Form for Resolving Context-Dependency of Binary Code. Proceedings of the 13th Workshop on Programming Languages and Analysis for Security. :48-53.

Program analysis on binary code is considered difficult because one has to resolve the destinations of indirect jumps. However, there is another difficulty, context-dependency, which matters when one processes binary programs that are not compiler-generated. In this paper, we propose a novel approach for tackling these difficulties and describe a way to reconstruct control flow from a binary program with no assumptions other than the operational meaning of machine instructions.

Sultana, K. Z., Deo, A., Williams, B. J.  2017.  Correlation Analysis among Java Nano-Patterns and Software Vulnerabilities. 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE). :69–76.

Ensuring software security is essential for developing reliable software. Software can suffer from security problems due to weaknesses in code constructs introduced during development. Our goal is to relate software security to different code constructs so that developers can become aware, very early, of coding weaknesses that might be related to a vulnerability. In this study, we chose Java nano-patterns as code constructs: method-level patterns defined on the attributes of Java methods. This study aims to find the correlation between software vulnerability and method-level structural code constructs known as nano-patterns. We identified the vulnerable methods from 39 versions of three major releases of Apache Tomcat for our first case study. We extracted nano-patterns from the affected methods of these releases. We also extracted nano-patterns from the non-vulnerable methods of Apache Tomcat; for this, we selected the last version of each of the three major releases (6.0.45 for release 6, 7.0.69 for release 7 and 8.0.33 for release 8) as the non-vulnerable versions. Then, we compared the nano-pattern distributions in vulnerable versus non-vulnerable methods. In our second case study, we extracted nano-patterns from the affected methods of three vulnerable J2EE web applications: Blueblog 1.0, Personalblog 1.2.6 and Roller 0.9.9, all of which were deliberately made vulnerable for testing purposes. We found that some nano-patterns such as objCreator, staticFieldReader, typeManipulator, looper, exceptions, localWriter and arrReader are more prevalent in affected methods, whereas some, such as straightLine, are more prevalent in non-affected methods. We conclude that nano-patterns can be used as an indicator of the vulnerability-proneness of code.
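A sketch of the kind of prevalence comparison described, assuming nano-pattern sets have already been extracted per method; the data below is made up for illustration, and the paper's extraction tooling is not reproduced:

```python
from collections import Counter

def prevalence(methods):
    # Fraction of methods in which each nano-pattern occurs.
    # `methods` is a list of sets of pattern names, one set per Java method.
    counts = Counter(p for m in methods for p in m)
    return {p: c / len(methods) for p, c in counts.items()}

# Hypothetical extracted pattern sets, for illustration only.
vulnerable = [{"objCreator", "looper"}, {"looper", "exceptions"}, {"looper"}]
clean = [{"straightLine"}, {"straightLine", "localWriter"}]

pv, pc = prevalence(vulnerable), prevalence(clean)
# Rank patterns by over-representation in vulnerable methods, with
# add-one-style smoothing for patterns absent from the clean set.
ratio = {p: pv[p] / pc.get(p, 1 / (len(clean) + 1)) for p in pv}
print(sorted(ratio, key=ratio.get, reverse=True))
```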

Yudin, Oleksandr, Artemov, Volodymyr, Krasnorutsky, Andrii, Barannik, Vladimir, Tupitsya, Ivan, Pris, Gennady.  2021.  Creating a Mathematical Model for Estimating the Impact of Errors in the Process of Reconstruction of Non-Uniform Code Structures on the Quality of Recoverable Video Images. 2021 IEEE 3rd International Conference on Advanced Trends in Information Theory (ATIT). :40–45.
Existing compression coding technologies are investigated using a statistical approach. The fundamental strategies used in the process of statistical coding of video information data are analyzed, along with factors that have a significant impact on the reliability and efficiency of video delivery during statistical coding. A model for estimating the impact of errors in the reconstruction of non-uniform code structures on the quality of recoverable video images is developed. The influence of errors that occur in data transmission channels on the reliability of the reconstructed video image is investigated.
D
Frank, A.  2020.  Delay-Optimal Coding for Secure Transmission over Parallel Burst Erasure Channels with an Eavesdropper. 2020 IEEE International Symposium on Information Theory (ISIT). :960–965.

For streaming applications, we consider parallel burst erasure channels in the presence of an eavesdropper. The legitimate receiver must perfectly recover each source symbol subject to a decoding delay constraint without the eavesdropper gaining any information from his observation. For a certain class of code parameters, we propose delay-optimal M-link codes that recover multiple bursts of erasures of a limited length, and where the codes provide perfect security even if the eavesdropper can observe a link of his choice. Our codes achieve the maximum secrecy rate for the channel model.

Günlü, Onur, Kliewer, Jörg, Schaefer, Rafael F., Sidorenko, Vladimir.  2021.  Doubly-Exponential Identification via Channels: Code Constructions and Bounds. 2021 IEEE International Symposium on Information Theory (ISIT). :1147–1152.
Consider the identification (ID) via channels problem, where a receiver wants to decide whether the transmitted identifier is its own identifier, rather than decoding the identifier. This model allows the transmission of identifiers whose size scales doubly-exponentially in the blocklength, unlike common transmission (or channel) codes, whose size scales exponentially. It suffices to use binary constant-weight codes (CWCs) to achieve the ID capacity. By relating the parameters of a binary CWC to the minimum distance of a code and using higher-order correlation moments, two upper bounds on the binary CWC size are proposed. These bounds are shown to be upper bounds also on the identifier sizes of ID codes constructed from binary CWCs. We propose two code constructions based on optical orthogonal codes, which are used in optical multiple-access schemes, have constant-weight codewords, and satisfy cyclic cross-correlation and autocorrelation constraints. These constructions are modified and concatenated with outer Reed-Solomon codes to propose new binary CWCs that are optimal for ID. Improvements to the finite-parameter performance of both our and existing code constructions are shown by using outer codes with larger minimum-distance-to-blocklength ratios. We also illustrate ID performance regimes in which our ID code constructions perform significantly better than existing ones.
E
Yudin, Oleksandr, Ziubina, Ruslana, Buchyk, Serhii, Frolov, Oleg, Suprun, Olha, Barannik, Natalia.  2019.  Efficiency Assessment of the Steganographic Coding Method with Indirect Integration of Critical Information. 2019 IEEE International Conference on Advanced Trends in Information Theory (ATIT). :36–40.
The presented method of encoding and steganographic embedding of a series of bits of the hidden message works by modifying the digital basis of the elements of the image container. Unlike other methods, steganographic coding and embedding is accomplished by changing the elements of an image fragment, followed by the formation of code structures for the established structure of the digital representation of the structural elements of the image container. A method for estimating quantitative indicators of the embedded critical data is presented, and the number of container bits available to the developed method of steganographic coding and embedding is estimated. The efficiency of the presented method is evaluated: the amount of embedded digital data is compared with that of the method based on the weight coefficients of the discrete cosine transform matrix, and the developed steganographic coding method is compared with the Koch and Zhao methods to determine the resistance of the embedded data against attacks of various types. It is determined that, for different values of the quantization coefficient, the most critical are the embedded containers of critical information that are built by changing part of the digital video data basis, depending on the size of the digital basis and the number of bits of the embedded container.
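The abstract gives little detail of the basis-modification coding itself; as a stand-in for the embed/extract workflow it evaluates, here is the textbook least-significant-bit baseline (explicitly not the paper's method, and far weaker):

```python
import numpy as np

def embed_lsb(container: np.ndarray, bits: list) -> np.ndarray:
    """Hide a bit string in the least significant bits of pixel values.
    A classic baseline, NOT the basis-modification coding of the paper."""
    stego = container.flatten()          # flatten() returns a copy
    assert len(bits) <= stego.size, "message longer than container capacity"
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b # clear the LSB, then set it to b
    return stego.reshape(container.shape)

def extract_lsb(stego: np.ndarray, n: int) -> list:
    return [int(v) & 1 for v in stego.flatten()[:n]]

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # toy container
msg = [1, 0, 1, 1, 0, 1]
assert extract_lsb(embed_lsb(img, msg), len(msg)) == msg
```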
Zhou, Wei, Wang, Jin, Li, Lingzhi, Wang, Jianping, Lu, Kejie, Zhou, Xiaobo.  2019.  An Efficient Secure Coded Edge Computing Scheme Using Orthogonal Vector. 2019 IEEE Intl Conf on Parallel Distributed Processing with Applications, Big Data Cloud Computing, Sustainable Computing Communications, Social Computing Networking (ISPA/BDCloud/SocialCom/SustainCom). :100–107.

In recent years, Edge Computing (EC) has attracted increasing attention for its advantages in handling latency-sensitive and compute-intensive applications, and it is becoming a widespread solution to the last-mile problem of cloud computing. However, in actual EC deployments, data confidentiality becomes an unignorable issue because edge devices may be untrusted. In this paper, a secure and efficient edge computing scheme based on linear coding is proposed. Generally, linear coding can be utilized to achieve data confidentiality by encoding random blocks with the original data blocks before they are distributed to unreliable edge nodes. However, the addition of a large number of irrelevant random blocks also brings great communication overhead and high decoding complexity. In this paper, we focus on the design of secure coded edge computing using orthogonal vectors to protect the information-theoretic security of the data matrix stored on edge nodes and the input matrix uploaded by the user device, while further reducing the communication overhead and decoding complexity.
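The specific orthogonal-vector construction is the paper's contribution; the underlying principle (store only linear combinations of data and random blocks, chosen so the user can cancel the randomness) can be sketched with a simple degree-1, Shamir-style masking. All shapes and values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Confidential data matrix A and a random mask R of the same shape.
A = rng.standard_normal((4, 4))
R = rng.standard_normal((4, 4))

# Each edge node i stores the combination A + i*R; no single node holds A
# in the clear. (Over a finite field this masking would be a one-time pad;
# reals are used here only to keep the sketch short.)
node = {i: A + i * R for i in (1, 2)}

x = rng.standard_normal(4)            # user's input vector
y1, y2 = node[1] @ x, node[2] @ x     # nodes compute on their coded shares

# The mask lies in a direction the user can cancel: 2*y1 - y2 = A @ x.
assert np.allclose(2 * y1 - y2, A @ x)
```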

Breuer, P. T., Bowen, J. P., Palomar, E., Liu, Z..  2017.  Encrypted computing: Speed, security and provable obfuscation against insiders. 2017 International Carnahan Conference on Security Technology (ICCST). :1–6.

Over the past few years we have articulated theory that describes ‘encrypted computing’, in which data remains in encrypted form while being worked on inside a processor, by virtue of a modified arithmetic. The last two years have seen research and development on a standards-compliant processor that shows that near-conventional speeds are attainable via this approach. Benchmark performance with the US AES-128 flagship encryption and a 1 GHz clock is now equivalent to a 433 MHz classic Pentium, and most block encryptions fit in AES's place. This summary article details how user data is protected by a system based on the processor from being read or interfered with by the computer operator, for those computing paradigms that entail trust in data-oriented computation in remote locations where it may be accessible to powerful and dishonest insiders. We combine: (i) the processor that runs encrypted; (ii) a slightly modified conventional machine code instruction set architecture with which security is achievable; (iii) an ‘obfuscating’ compiler that takes advantage of its possibilities, forming a three-point system that provably provides cryptographic "semantic security" for user data against the operator and system insiders.

Carver, Jeffrey C., Burcham, Morgan, Kocak, Sedef Akinli, Bener, Ayse, Felderer, Michael, Gander, Matthias, King, Jason, Markkula, Jouni, Oivo, Markku, Sauerwein, Clemens et al.  2016.  Establishing a Baseline for Measuring Advancement in the Science of Security: An Analysis of the 2015 IEEE Security & Privacy Proceedings. Proceedings of the Symposium and Bootcamp on the Science of Security. :38–51.

To help establish a more scientific basis for security science, which will enable the development of fundamental theories and move the field from being primarily reactive to primarily proactive, it is important for research results to be reported in a scientifically rigorous manner. Such reporting will allow for the standard pillars of science, namely replication, meta-analysis, and theory building. In this paper we aim to establish a baseline of the state of scientific work in security through the analysis of indicators of scientific research as reported in the papers from the 2015 IEEE Symposium on Security and Privacy. To conduct this analysis, we developed a series of rubrics to determine the completeness of the papers relative to the type of evaluation used (e.g. case study, experiment, proof). Our findings showed that while papers are generally easy to read, they often do not explicitly document some key information like the research objectives, the process for choosing the cases to include in the studies, and the threats to validity. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.

Hwang, Seunggyu, Lee, Hyein, Kim, Sooyoung.  2022.  Evaluation of physical-layer security schemes for space-time block coding under imperfect channel estimation. 2022 27th Asia Pacific Conference on Communications (APCC). :580–585.

With the advent of massive machine-type communications, security protection becomes more important than ever. Efforts have been made to build security protection capability into physical-layer signal design, so-called physical-layer security (PLS). The purpose of this paper is to evaluate the performance of PLS schemes for multi-input multi-output (MIMO) systems with space-time block coding (STBC) under imperfect channel estimation. Three PLS schemes for STBC are modeled, their bit error rate (BER) performance is evaluated under various channel estimation error environments, and their performance characteristics are analyzed.

ISSN: 2163-0771
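A Monte-Carlo sketch of the kind of evaluation described: classic 2x1 Alamouti STBC (used here as a stand-in for the paper's three PLS schemes) with BPSK, Rayleigh fading, and a Gaussian channel-estimation error. The SNR and error-variance values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_blocks, snr_db, est_err_var = 100_000, 10, 0.05
noise_var = 10 ** (-snr_db / 10)

s = rng.choice([-1.0, 1.0], size=(n_blocks, 2))   # BPSK symbol pairs (s1, s2)
h = (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2))) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2)))

# Alamouti: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*).
r1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + n[:, 0]
r2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + n[:, 1]

# The receiver only has a noisy channel estimate h_hat = h + e.
e = np.sqrt(est_err_var / 2) * (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2)))
hh = h + e

# Standard Alamouti combining, performed with the imperfect estimate.
s1_hat = np.conj(hh[:, 0]) * r1 + hh[:, 1] * np.conj(r2)
s2_hat = np.conj(hh[:, 1]) * r1 - hh[:, 0] * np.conj(r2)

ber = np.mean(np.sign(np.real(np.c_[s1_hat, s2_hat])) != s)
print(f"BER with estimation error variance {est_err_var}: {ber:.4f}")
```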

Yang, Hongna, Zhang, Yiwei.  2022.  On an extremal problem of regular graphs related to fractional repetition codes. 2022 IEEE International Symposium on Information Theory (ISIT). :1566–1571.
Fractional repetition (FR) codes are a special family of regenerating codes with the repair-by-transfer property. The constructions of FR codes are naturally related to combinatorial designs, graphs, and hypergraphs. Given the file size of an FR code, it is desirable to determine the minimum number of storage nodes needed. The problem is related to an extremal graph theory problem, which asks for the minimum number of vertices of an α-regular graph such that any subgraph with k vertices has at most δ edges. In this paper, we present a class of regular graphs for this problem to give bounds on the minimum number of storage nodes for FR codes.
ISSN: 2157-8117
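The extremal condition is easy to state computationally. A brute-force sketch that checks, for a small toy graph (K_{3,3}, which is 3-regular; not one of the paper's constructions), the maximum number of edges any k vertices can span:

```python
from itertools import combinations

def max_edges_on_k_vertices(edges, vertices, k):
    # Max number of edges spanned by any k-subset of vertices.
    # Brute force, so only suitable for small graphs.
    best = 0
    for sub in combinations(vertices, k):
        s = set(sub)
        best = max(best, sum(1 for u, v in edges if u in s and v in s))
    return best

# K_{3,3}: a 3-regular graph on 6 vertices, used purely as a toy example.
verts = range(6)
edges = [(u, v) for u in (0, 1, 2) for v in (3, 4, 5)]

print(max_edges_on_k_vertices(edges, verts, k=4))  # -> 4: any 4 vertices span at most 4 edges
```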
Mufassa, Fauzil Halim, Anwar, Khoirul.  2019.  Extrinsic Information Transfer (EXIT) Analysis for Short Polar Codes. 2019 Symposium on Future Telecommunication Technologies (SOFTT). 1:1–6.

Polar codes polarize the quality of channels into either completely noisy or noiseless channels. This paper presents extrinsic information transfer (EXIT) analysis for iterative decoding of Polar codes to reveal the mechanism of this channel transformation. The purpose of understanding the transformation process is to comprehend the placement of information bits and frozen bits, and the security standard of Polar codes. Mutual information is derived based on the concept of EXIT charts for the check nodes and variable nodes of low-density parity-check (LDPC) codes and then applied to Polar codes. This paper explores the quality of the polarized channels at finite blocklength, which is of interest since the fifth generation of telecommunications (5G) limits the block length. This paper reveals how the EXIT curves of Polar codes change and explores their polarization characteristics; thus, high values of mutual information are needed for frozen bits to be detectable. Otherwise, the error correction capability of Polar codes drastically decreases. These results are expected to serve as a reference for the development of Polar codes for 5G technologies and beyond.
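For readers unfamiliar with EXIT analysis, the machinery the paper transfers from LDPC codes can be sketched quickly: ten Brink's J-function maps an LLR standard deviation to a mutual information, and the variable-node transfer curve combines a priori and channel information. A Monte-Carlo sketch, with arbitrary degree and noise values; this is the generic LDPC form, not the paper's Polar-specific curves:

```python
import numpy as np

rng = np.random.default_rng(2)

def J(sigma, n=200_000):
    # Mutual information of a consistent Gaussian LLR, L ~ N(sigma^2/2, sigma^2),
    # estimated by Monte Carlo (ten Brink's J-function).
    if sigma == 0:
        return 0.0
    L = rng.normal(sigma**2 / 2, sigma, size=n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

def J_inv(I, lo=0.0, hi=50.0, tol=1e-3):
    # Numerical inverse of J by bisection; Monte-Carlo noise makes it approximate.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Variable-node EXIT curve as in LDPC analysis: degree dv, channel LLR std sigma_ch.
dv, sigma_ch = 3, 1.0
for Ia in (0.1, 0.5, 0.9):
    Ie = J(np.sqrt((dv - 1) * J_inv(Ia)**2 + sigma_ch**2))
    print(f"I_A={Ia:.1f} -> I_E={Ie:.3f}")
```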

F
Castellanos, John H., Ochoa, Martin, Zhou, Jianying.  2018.  Finding Dependencies Between Cyber-Physical Domains for Security Testing of Industrial Control Systems. Proceedings of the 34th Annual Computer Security Applications Conference. :582–594.

In modern societies, critical services such as transportation, power supply, water treatment and distribution are strongly dependent on Industrial Control Systems (ICS). As technology moves along, new features improve services provided by such ICS. On the other hand, this progress also introduces new risks of cyber attacks due to the multiple direct and indirect dependencies between cyber and physical components of such systems. Performing rigorous security tests and risk analysis in these critical systems is thus a challenging task, because of the non-trivial interactions between digital and physical assets and the domain-specific knowledge necessary to analyse a particular system. In this work, we propose a methodology to model and analyse a System Under Test (SUT) as a data flow graph that highlights interactions among internal entities throughout the SUT. This model is automatically extracted from production code available in Programmable Logic Controllers (PLCs). We also propose a reachability algorithm and an attack diagram that will emphasize the dependencies between cyber and physical domains, thus enabling a human analyst to gauge various attack vectors that arise from subtle dependencies in data and information propagation. We test our methodology in a functional water treatment testbed and demonstrate how an analyst could make use of our designed attack diagrams to reason on possible threats to various targets of the SUT.
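The reachability step of such an analysis is a plain graph traversal once the data-flow graph has been extracted from the PLC code. A sketch, where the tag names are hypothetical and the hard part (extraction from PLC programs) is not shown:

```python
from collections import deque

def reachable(graph, sources):
    """BFS over a data-flow graph: which entities can be influenced, directly
    or transitively, from the given source entities (e.g., a network input)?
    graph: dict mapping entity -> iterable of entities it writes/flows into."""
    seen, queue = set(sources), deque(sources)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Hypothetical PLC-style flow: a network tag feeds a setpoint, which drives a pump.
flow = {"net_cmd": ["setpoint"], "setpoint": ["pid"], "pid": ["pump"], "sensor": ["pid"]}
print(reachable(flow, {"net_cmd"}))   # {'net_cmd', 'setpoint', 'pid', 'pump'}
```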

Besser, Karl-Ludwig, Janda, Carsten R., Lin, Pin-Hsun, Jorswieck, Eduard A.  2019.  Flexible Design of Finite Blocklength Wiretap Codes by Autoencoders. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2512–2516.

With an increasing number of wireless devices, the risk of eavesdropping increases as well. From information theory, it is well known that wiretap codes can asymptotically achieve a vanishing decoding error probability at the legitimate receiver while also achieving vanishing leakage to eavesdroppers. However, at finite blocklength there exists a tradeoff among the different parameters of the transmission. In this work, we propose a flexible wiretap code design for Gaussian wiretap channels at finite blocklength using neural network autoencoders. We show that the proposed scheme offers higher flexibility in the error rate and leakage tradeoff than traditional codes.
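A toy PyTorch sketch of the autoencoder idea, with leakage approximated by an adversarially trained eavesdropper decoder. This is one common way to train such codes, not necessarily the paper's exact loss or architecture; all dimensions, noise levels, and the weight LAM are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 4-bit messages, blocklength 8; Bob's channel is less noisy than Eve's.
K, N, SIGMA_BOB, SIGMA_EVE, LAM = 4, 8, 0.3, 1.0, 0.5

enc = nn.Sequential(nn.Linear(2 ** K, 16), nn.ReLU(), nn.Linear(16, N))
bob = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, 2 ** K))
eve = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, 2 ** K))
opt = torch.optim.Adam(list(enc.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

for step in range(2000):
    m = torch.randint(0, 2 ** K, (256,))
    x = enc(F.one_hot(m, 2 ** K).float())
    x = x / x.norm(dim=1, keepdim=True) * N ** 0.5   # per-codeword power constraint
    y_bob = x + SIGMA_BOB * torch.randn_like(x)       # legitimate channel
    y_eve = x + SIGMA_EVE * torch.randn_like(x)       # eavesdropper's channel

    # Eve trains to decode from her own observation.
    opt_eve.zero_grad()
    F.cross_entropy(eve(y_eve.detach()), m).backward()
    opt_eve.step()

    # Encoder/Bob train for reliability while making Eve's decoding hard;
    # her cross-entropy serves as a crude leakage proxy.
    opt.zero_grad()
    loss = F.cross_entropy(bob(y_bob), m) - LAM * F.cross_entropy(eve(y_eve), m)
    loss.backward()
    opt.step()
```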

G
Chen, Z., Jia, Z., Wang, Z., Jafar, S. A.  2020.  GCSA Codes with Noise Alignment for Secure Coded Multi-Party Batch Matrix Multiplication. 2020 IEEE International Symposium on Information Theory (ISIT). :227–232.

A secure multi-party batch matrix multiplication (SMBMM) problem is considered, where the goal is to allow a master to efficiently compute the pairwise products of two batches of massive matrices by distributing the computation across S servers. Any X colluding servers gain no information about the input, and the master gains no additional information about the input beyond the product. A solution called Generalized Cross-Subspace Alignment codes with Noise Alignment (GCSA-NA) is proposed in this work, based on cross-subspace alignment codes. The state-of-the-art solution to SMBMM is a coding scheme called polynomial sharing (PS), proposed by Nodehi and Maddah-Ali. GCSA-NA outperforms PS codes in several key aspects: more efficient and secure inter-server communication, lower latency, flexible inter-server network topology, efficient batch processing, and tolerance to stragglers.
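The baseline PS idea is easy to sketch: mask each matrix with random noise along a polynomial, let each server multiply its shares, and interpolate the product at zero. A minimal single-collusion sketch over a small prime field; this illustrates polynomial sharing generically, not the GCSA-NA construction or its noise alignment:

```python
import numpy as np

P = 65537  # prime field size; all arithmetic is mod P

def shares(M, x_pts, rng):
    # Degree-1 masking: the share at point x is (M + x*R) mod P, so any
    # single server sees M under a uniformly random one-time pad R.
    R = rng.integers(0, P, M.shape)
    return [(M + x * R) % P for x in x_pts]

rng = np.random.default_rng(3)
A = rng.integers(0, P, (2, 3))
B = rng.integers(0, P, (3, 2))

x_pts = [1, 2, 3]                  # evaluation points, one per server
shares_A = shares(A, x_pts, rng)
shares_B = shares(B, x_pts, rng)

# Each server multiplies its shares; the results are evaluations of a
# degree-2 matrix polynomial whose constant term is A @ B.
evals = [(sa @ sb) % P for sa, sb in zip(shares_A, shares_B)]

# Lagrange interpolation at x = 0 recovers A @ B.
result = np.zeros_like(evals[0])
for i, xi in enumerate(x_pts):
    num, den = 1, 1
    for j, xj in enumerate(x_pts):
        if i != j:
            num = num * (-xj) % P
            den = den * (xi - xj) % P
    lam = num * pow(den, P - 2, P) % P   # Fermat inverse of the denominator
    result = (result + lam * evals[i]) % P

assert np.array_equal(result, (A @ B) % P)
```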

Wiese, Moritz, Boche, Holger.  2019.  A Graph-Based Modular Coding Scheme Which Achieves Semantic Security. 2019 IEEE International Symposium on Information Theory (ISIT). :822–826.

It is investigated how to achieve semantic security for the wiretap channel. A new type of functions called biregular irreducible (BRI) functions, similar to universal hash functions, is introduced. BRI functions provide a universal method of establishing secrecy. It is proved that the known secrecy rates of any discrete and Gaussian wiretap channel are achievable with semantic security by modular wiretap codes constructed from a BRI function and an error-correcting code. A characterization of BRI functions in terms of edge-disjoint biregular graphs on a common vertex set is derived. This is used to study examples of BRI functions and to construct new ones.

H
Ferragut, Erik M., Brady, Andrew C., Brady, Ethan J., Ferragut, Jacob M., Ferragut, Nathan M., Wildgruber, Max C.  2016.  HackAttack: Game-Theoretic Analysis of Realistic Cyber Conflicts. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :8:1–8:8.

Game theory is appropriate for studying cyber conflict because it allows for an intelligent and goal-driven adversary. Applications of game theory have led to a number of results regarding optimal attack and defense strategies. However, the overwhelming majority of applications explore overly simplistic games, often ones in which each participant's actions are visible to every other participant. These simplifications strip away the fundamental properties of real cyber conflicts: probabilistic alerting, hidden actions, and unknown opponent capabilities. In this paper, we demonstrate that it is possible to analyze a more realistic game, one in which different resources have different weaknesses, players have different exploits, and moves occur in secrecy, but can be detected. Certainly, more advanced and complex games are possible, but the game presented here is more realistic than any other game we know of in the scientific literature. While optimal strategies can be found for simpler games using calculus, case-by-case analysis, or, for stochastic games, Q-learning, our more complex game is more naturally analyzed using the same methods used to study other complex games, such as checkers and chess. We define a simple evaluation function and employ multi-step searches to create strategies. We show that such scenarios can be analyzed, and find that in cases of extreme uncertainty, it is often better to ignore one's opponent's possible moves. Furthermore, we show that a simple evaluation function in a complex game can lead to interesting and nuanced strategies that tend to select moves well tuned to the details of the situation and the relative probabilities of success.
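The search recipe itself is classical. A compact sketch of depth-limited negamax with a plug-in evaluation function, demonstrated on one-pile Nim as a stand-in; the paper's game adds hidden actions and probabilistic detection, which this sketch omits:

```python
def negamax(state, depth, game):
    # Depth-limited negamax with a heuristic evaluation at the horizon.
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None
    best_val, best_move = float("-inf"), None
    for move in game.moves(state):
        val, _ = negamax(game.apply(state, move), depth - 1, game)
        if -val > best_val:                  # the opponent's gain is our loss
            best_val, best_move = -val, move
    return best_val, best_move

class Nim:
    """One-pile Nim (take 1-3 stones; taking the last stone wins).
    A trivial stand-in for the paper's much richer cyber game."""
    def is_terminal(self, s): return s == 0
    def evaluate(self, s): return -1 if s == 0 else 0  # to-move player has lost
    def moves(self, s): return [t for t in (1, 2, 3) if t <= s]
    def apply(self, s, move): return s - move

print(negamax(10, 6, Nim()))  # -> (1, 2): take 2, leaving a multiple of 4
```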