
Filters: Keyword is information assurance
2019-08-05
Headrick, W. J., Dlugosz, A., Rajcok, P.  2018.  Information Assurance in modern ATE. 2018 IEEE AUTOTESTCON. :1–4.

For modern Automatic Test Equipment (ATE), one of the most daunting tasks is now Information Assurance (IA). What was once at most a secondary item, consisting mainly of installing an anti-virus suite, is becoming one of the most important aspects of ATE. Given the current IA climate, it has become important to ensure that ATE is kept safe from security breaches and loss of information. Even though most ATE systems are not on the Internet (many are not on a network at all), they are still vulnerable to some of the same attack vectors that plague common computers and other electronic devices. This paper discusses some of the processes and procedures that must be used to ensure that modern ATE can continue to be used to test and detect faults in the systems it is designed to test. The common items that must be considered for ATE are as follows:

- The ATE system must have some form of anti-virus protection (as should all computers).
- The ATE system should have a minimal software footprint, providing only the software needed to perform its task.
- The ATE system should be verified to have all Operating System (OS) settings configured for the task it is intended to perform.
- The OS settings should include password and password-expiration policies to prevent access by anyone not expected to be on the system.
- The ATE system software should be written and constructed such that it is not itself readily open to attack.
- The ATE system should be designed such that none of the instruments in the system can easily be attacked.
- The ATE system should ensure that paths to the outside world (such as Ethernet or USB devices) are limited to only those required for the task it was designed for.

These and many other common configuration concerns are discussed in the paper.
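
Several of these items are mechanically checkable. As a hedged illustration (our sketch, not anything from the paper), a minimal audit of two of the listed concerns, password aging and a restricted set of network paths, might look as follows; the policy threshold, the allow-listed port, and the sample data are assumptions:

```python
# Hypothetical ATE lockdown audit sketch: illustrates the kinds of checks
# the paper enumerates (password policy, limited outside-world paths).
# The threshold and allow-list below are illustrative assumptions, not
# values taken from the paper.

MAX_PASSWORD_AGE_DAYS = 90          # assumed site policy
ALLOWED_LISTENING_PORTS = {5025}    # e.g., a single SCPI instrument port

def check_password_age(shadow_line: str) -> bool:
    """Field 5 of an /etc/shadow entry is the maximum password age in days."""
    fields = shadow_line.split(":")
    return fields[4].isdigit() and int(fields[4]) <= MAX_PASSWORD_AGE_DAYS

def check_listening_ports(ports: set[int]) -> set[int]:
    """Return any listening ports outside the approved allow-list."""
    return ports - ALLOWED_LISTENING_PORTS

if __name__ == "__main__":
    print(check_password_age("tester:*:18000:0:60:7:::"))   # True: 60 <= 90
    print(check_listening_ports({5025, 8080}))              # {8080} flagged
```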

2019-03-28
Varga, S., Brynielsson, J., Franke, U.  2018.  Information Requirements for National Level Cyber Situational Awareness. 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). :774–781.

As modern societies become more dependent on IT services, the potential impact of both adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness: decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise held in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into which information elements are perceived as useful, which can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations create cyber common operational pictures today. Among the findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.

2018-02-02
Rogers, R., Apeh, E., Richardson, C. J.  2016.  Resilience of the Internet of Things (IoT) from an Information Assurance (IA) perspective. 2016 10th International Conference on Software, Knowledge, Information Management & Applications (SKIMA). :110–115.

Internet infrastructure developments and the rise of IoT Socio-Technical Systems (STS) have frequently produced insecure protocols to facilitate rapid intercommunication between the plethora of IoT devices. Whereas current development of the IoT has focused mainly on enabling and effectively meeting the functionality requirements of digitally-enabled enterprises, scant regard has been paid to IA architecture, marginalizing system resilience and treating cyber defence as a blatant afterthought. Whilst interconnected IoT devices do facilitate and expand information sharing, they also increase risk exposure and the potential loss of trust in their Socio-Technical Systems. A change in the IoT paradigm is needed to enable a security-first mind-set if the trusted sharing of information, built upon the dependable and resilient growth of the IoT, is to be established and maintained. We argue that Information Assurance is paramount to the success of the IoT, specifically to its resilience and its dependability in continuing to safely support our digital economy.

2017-12-12
Taylor, J. M., Sharif, H. R..  2017.  Security challenges and methods for protecting critical infrastructure cyber-physical systems. 2017 International Conference on Selected Topics in Mobile and Wireless Networking (MoWNeT). :1–6.

Cyber-Physical Systems (CPS) represent a fundamental link between information technology (IT) systems and the devices that control industrial production and maintain critical infrastructure services that support our modern world. Increasingly, the interconnections among CPS and IT systems have created exploitable security vulnerabilities due to a number of factors, including a legacy of weak information security applications on CPS and the tendency of CPS operators to prioritize operational availability at the expense of integrity and confidentiality. As a result, CPS are subject to a number of threats from cyber and cyber-physical attackers, including denial of service and even attacks against the integrity of the data in the system. The effects of these attacks extend beyond mere loss of data or the inability to access information system services. Attacks against CPS can cause physical damage in the real world. This paper reviews the challenges of providing information assurance services for CPS that operate critical infrastructure systems and industrial control systems. These range from thorough measures to close integrity and confidentiality gaps in CPS to processes that highlight the security risks that remain. The paper also outlines approaches to reduce the overhead and complexity of security methods, and examines novel approaches, including covert communications channels, to increase CPS security.
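
One of the integrity gaps discussed above is tampering with telemetry between field devices and IT systems. As a hedged sketch (our illustration, not a method from the paper), message integrity can be added with a keyed MAC; the message format and key handling below are assumptions:

```python
# Minimal integrity-protection sketch for a CPS telemetry message.
# Illustrative only: the message layout and shared-key provisioning are
# assumptions, not a design from the paper.

import hmac
import hashlib

SHARED_KEY = b"pre-shared-plant-key"   # in practice: provisioned per device

def seal(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering in transit is detectable."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """Split payload and tag; raise if the tag does not match the payload."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return payload

if __name__ == "__main__":
    wire = seal(b"valve_3:pressure=42.0")
    print(verify(wire))                             # round-trips cleanly
    tampered = b"valve_3:pressure=99.9" + wire[-32:]
    try:
        verify(tampered)
    except ValueError as err:
        print(err)                                  # integrity check failed
```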

2017-09-05
Queiroz, Rodrigo, Berger, Thorsten, Czarnecki, Krzysztof.  2016.  Towards Predicting Feature Defects in Software Product Lines. Proceedings of the 7th International Workshop on Feature-Oriented Software Development. :58–62.

Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective. Our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on these results, we discuss directions for future work.
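
For readers who want to picture the approach, a minimal sketch of feature-level defect prediction with a Naive Bayes classifier follows; the process metrics (commit count, churn, author count) and the toy data are our assumptions, not the paper's dataset:

```python
# Sketch of feature-level defect prediction with a Naive Bayes classifier,
# in the spirit of the paper. The metrics and data are illustrative.

from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
import numpy as np

# One row per product-line feature: [commits touching it, lines churned, authors]
X = np.array([
    [ 3,  40, 1], [25,  900, 5], [ 8, 120, 2], [40, 1500, 7],
    [ 2,  15, 1], [18,  600, 4], [ 5,  80, 2], [30, 1100, 6],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = feature had a post-release defect

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=4)  # accuracy per fold
print("mean accuracy:", scores.mean())
```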

Beaumont, Mark, McCarthy, Jim, Murray, Toby.  2016.  The Cross Domain Desktop Compositor: Using Hardware-based Video Compositing for a Multi-level Secure User Interface. Proceedings of the 32nd Annual Conference on Computer Security Applications. :533–545.

We have developed the Cross Domain Desktop Compositor, a hardware-based multi-level secure user interface, suitable for deployment in high-assurance environments. Through composition of digital display data from multiple physically-isolated single-level secure domains, and judicious switching of keyboard and mouse input, we provide an integrated multi-domain desktop solution. The system developed enforces a strict information flow policy and requires no trusted software. To fulfil high-assurance requirements and achieve a low cost of accreditation, the architecture favours simplicity, using mainly commercial-off-the-shelf components complemented by small trustworthy hardware elements. The resulting user interface is intuitive and responsive and we show how it can be further leveraged to create integrated multi-level applications and support managed information flows for secure cross domain solutions. This is a new approach to the construction of multi-level secure user interfaces and multi-level applications which minimises the required trusted computing base, whilst maintaining much of the desired functionality.
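
The judicious input switching can be pictured as a tiny routing policy: events flow only to the operator-selected domain, so no software path exists between domains. A hedged toy model (ours, not the authors' hardware design):

```python
# Toy model of the input-switching rule in a multi-level desktop:
# keyboard/mouse events are routed only to the active domain, so no
# cross-domain flow exists. Domain names are illustrative.

class InputSwitch:
    def __init__(self, domains):
        self.queues = {d: [] for d in domains}
        self.active = None

    def select(self, domain):
        """Operator-initiated switch; the only way focus changes."""
        assert domain in self.queues
        self.active = domain

    def key_event(self, event):
        """Events go to exactly one domain; others never observe them."""
        if self.active is not None:
            self.queues[self.active].append(event)

switch = InputSwitch(["SECRET", "UNCLASS"])
switch.select("SECRET")
switch.key_event("k1")
switch.select("UNCLASS")
switch.key_event("k2")
print(switch.queues)   # {'SECRET': ['k1'], 'UNCLASS': ['k2']}
```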

Ben Dhief, Yosra, Djemaiel, Yacine, Rekhis, Slim, Boudriga, Noureddine.  2016.  A Novel Sensor Cloud Based SCADA Infrastructure for Monitoring and Attack Prevention. Proceedings of the 14th International Conference on Advances in Mobile Computing and Multi Media. :45–49.

The infrastructures of Supervisory Control and Data Acquisition (SCADA) systems have evolved over time to provide more efficient supervision services. Despite the changes made to SCADA architectures, several enhancements are still required to address the need for: a) large-scale supervision using a high number of sensors; b) reduced reaction time when a malicious activity is detected; and c) assurance of high interoperability between SCADA systems in order to prevent the propagation of incidents. In this context, we propose a novel sensor-cloud-based SCADA infrastructure to monitor large-scale and interdependent critical infrastructures, making effective use of sensor clouds to increase supervision coverage and reduce processing time. It also ensures interoperability between interdependent SCADAs by offering a set of services to SCADA systems, which are created through the use of templates and are associated with sets of virtual sensors. A simulation is conducted to demonstrate the effectiveness of the proposed architecture.
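
The template mechanism can be pictured as a small factory: a template declares a measurement and an alert rule, and each instantiation binds it to physical sensor feeds. A hedged sketch with assumed names and fields (not the paper's API):

```python
# Sketch of template-driven virtual sensors for a sensor-cloud SCADA service.
# Template fields and the aggregation rule are illustrative assumptions.

from statistics import mean

class VirtualSensor:
    def __init__(self, name, feeds, threshold):
        self.name, self.feeds, self.threshold = name, feeds, threshold

    def read(self):
        """Aggregate the physical feeds and flag values over the threshold."""
        value = mean(f() for f in self.feeds)
        return value, value > self.threshold

def from_template(template, feeds):
    """Instantiate a virtual sensor from a declarative template."""
    return VirtualSensor(template["name"], feeds, template["alert_above"])

pressure_template = {"name": "pipeline_pressure", "alert_above": 80.0}
vs = from_template(pressure_template, feeds=[lambda: 78.0, lambda: 85.0])
print(vs.read())   # (81.5, True) -> alert raised to the supervising SCADA
```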

Huang, Haixing, Song, Jinghe, Lin, Xuelian, Ma, Shuai, Huai, Jinpeng.  2016.  TGraph: A Temporal Graph Data Management System. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :2469–2472.

Temporal graphs are a class of graphs whose nodes and edges, together with the associated properties, continuously change over time. Recently, systems have been developed to support snapshot queries over temporal graphs. However, these systems barely support aggregate time-range queries. Moreover, these systems cannot guarantee ACID transactions, an important feature for data management systems whenever concurrent processing is involved. To solve these issues, we design and develop TGraph, a temporal graph data management system that guarantees ACID transactions and supports fast temporal graph queries.
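
The two query classes can be pictured on a single temporal property: a snapshot query asks for the value in effect at a time point, while an aggregate time-range query folds over all versions in an interval. A hedged sketch (the storage layout is our assumption, not TGraph's):

```python
# Sketch of temporal property storage supporting snapshot and aggregate
# time-range queries. Versions are kept as sorted (timestamp, value) pairs.

import bisect

class TemporalProperty:
    def __init__(self):
        self.times, self.values = [], []      # kept sorted by time

    def set(self, t, value):
        i = bisect.bisect_left(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def at(self, t):
        """Snapshot query: the value in effect at time t."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.values[i] if i >= 0 else None

    def max_in(self, t0, t1):
        """Aggregate time-range query: max value set within [t0, t1]."""
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_right(self.times, t1)
        return max(self.values[lo:hi], default=None)

speed = TemporalProperty()
for t, v in [(1, 30), (4, 55), (9, 42)]:
    speed.set(t, v)
print(speed.at(5))         # 55  (snapshot at t=5)
print(speed.max_in(2, 9))  # 55  (aggregate over [2, 9])
```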

Ghanim, Yasser.  2016.  Toward a Specialized Quality Management Maturity Assessment Model. Proceedings of the 2nd Africa and Middle East Conference on Software Engineering. :1–8.

Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMi and TPI, which focus on testing. Quality management, as a main concern within the software industry, is broader than the concept of testing. The V-Model takes a broader view with the concepts of verification and validation. Quality Assurance (QA) is another broader term that includes the quality of processes. Configuration audits add further scope. In parallel, there are some less visible dimensions of quality not often addressed in traditional models, such as the business alignment of QA efforts. This paper compares the commonly accepted models related to software quality management and proposes a model that fills an empty space in this area. It analyses the concepts of maturity and capability levels and proposes adaptations for quality management assessment.

Kumar, S. Dinesh, Thapliyal, Himanshu.  2016.  QUALPUF: A Novel Quasi-Adiabatic Logic Based Physical Unclonable Function. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :24:1–24:4.

In recent years, the silicon-based Physical Unclonable Function (PUF) has evolved into one of the most popular hardware security primitives. PUFs are a class of circuits that use the inherent variations of the Integrated Circuit (IC) manufacturing process to create unique and unclonable IDs. PUFs have various security-related applications, such as detecting IC counterfeiting and piracy, and secure key management. In this paper, we present QUALPUF, a novel QUasi-Adiabatic Logic based PUF designed using an energy recovery technique. To the best of our knowledge, this is the first work on the hardware design of a PUF using adiabatic logic. The proposed design is energy efficient compared to recent hardware PUF designs proposed in the literature. Further, we propose a novel bit-extraction method for the proposed PUF which enlarges the space of challenge-response pairs. QUALPUF is evaluated on security metrics including reliability, uniqueness, uniformity, and bit-aliasing. The power and area of QUALPUF are also presented. SPICE simulations show that QUALPUF consumes 0.39 μW of power to generate a response bit.
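
Two of the cited metrics are easy to state concretely: uniformity is the fraction of 1s in one chip's response (ideal 0.5), and uniqueness is the mean normalized inter-chip Hamming distance (ideal 0.5). A small sketch over made-up responses:

```python
# Computes two of the PUF quality metrics named in the abstract over toy
# response bitstrings. The sample responses are made up; the metric
# definitions are the standard ones from the PUF literature.

from itertools import combinations

def uniformity(resp: str) -> float:
    """Fraction of 1s in one chip's response; ideal is 0.5."""
    return resp.count("1") / len(resp)

def uniqueness(responses: list[str]) -> float:
    """Mean normalized Hamming distance across chip pairs; ideal is 0.5."""
    dists = [sum(a != b for a, b in zip(r1, r2)) / len(r1)
             for r1, r2 in combinations(responses, 2)]
    return sum(dists) / len(dists)

chips = ["10110010", "01100110", "11010001"]   # toy per-chip responses
print([round(uniformity(c), 2) for c in chips])  # [0.5, 0.5, 0.5]
print(round(uniqueness(chips), 2))               # 0.58
```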

2017-06-05
Fredericks, Erik M..  2016.  Automatically Hardening a Self-adaptive System Against Uncertainty. Proceedings of the 11th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. :16–27.

A self-adaptive system (SAS) can reconfigure to adapt to potentially adverse conditions that can manifest in the environment at run time. However, the SAS may not have been explicitly developed with such conditions in mind, thereby requiring additional configuration states or updates to the requirements specification for the SAS to provide assurance that it continually satisfies its requirements and delivers acceptable behavior. By discovering both adverse environmental conditions and the SAS configuration states that can mitigate those conditions at design time, an SAS can be hardened against uncertainty prior to deployment, effectively extending its lifetime. This paper introduces two search-based techniques, Ragnarok and Valkyrie, for hardening an SAS against uncertainty. Ragnarok automatically discovers adverse conditions that negatively impact an SAS by searching for environmental conditions that explicitly cause requirements violations. Valkyrie then searches for SAS configurations that improve requirements satisficement throughout execution in response to discovered adverse environmental conditions. Together, these techniques can be used to improve the design and implementation of an SAS. We apply each technique to an industry-provided remote data mirroring application that can self-reconfigure in response to unknown or adverse conditions, such as network message delays, network link failures, and sensor noise.
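
Ragnarok's core loop can be pictured as a plain evolutionary search over environment parameters, keeping the environments that trigger the most requirement violations. The violation model and parameter ranges below are illustrative assumptions, not the paper's fitness function:

```python
# Toy search-based discovery of adverse environmental conditions, in the
# spirit of Ragnarok: evolve environment parameters (message delay, link
# failure rate, sensor noise) toward those that maximize requirement
# violations. The violation model is a made-up stand-in for a real SAS.

import random

def violations(env):
    """Hypothetical count of requirement violations under an environment."""
    delay, fail_rate, noise = env
    return (delay > 150) + (fail_rate > 0.3) + (noise > 0.5)

def mutate(env):
    d, f, n = env
    return (max(0.0, d + random.gauss(0, 30)),
            min(1.0, max(0.0, f + random.gauss(0, 0.1))),
            min(1.0, max(0.0, n + random.gauss(0, 0.1))))

random.seed(1)
pop = [(random.uniform(0, 300), random.random(), random.random())
       for _ in range(20)]
for _ in range(50):                                   # generations
    pop.sort(key=violations, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

worst = max(pop, key=violations)
print(worst, violations(worst))   # an adverse environment found by search
```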

2017-05-30
Vaughn, Jr., Rayford B., Morris, Tommy.  2016.  Addressing Critical Industrial Control System Cyber Security Concerns via High Fidelity Simulation. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :12:1–12:4.

This paper outlines a set of 10 cyber security concerns associated with Industrial Control Systems (ICS). The concerns address software and hardware development, implementation, and maintenance practices; supply chain assurance; the need for cyber forensics in ICS; a lack of awareness and training; and, finally, a need for test beds that can be used to address the first nine concerns. The concerns documented in this paper were developed based on the authors' combined experience conducting research in this field for the US Department of Homeland Security, the National Science Foundation, and the Department of Defense. The second half of the paper documents a virtual test bed platform offered as a tool to address the concerns listed in the first half. The paper discusses various types of test beds proposed in the literature for ICS research, provides an overview of the virtual test bed platform developed by the authors, and lists the future work required to extend the existing test beds to serve as a development platform.

2017-05-22
Dörre, Felix, Klebanov, Vladimir.  2016.  Practical Detection of Entropy Loss in Pseudo-Random Number Generators. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :678–689.

Pseudo-random number generators (PRNGs) are a critical infrastructure for cryptography and security of many computer applications. At the same time, PRNGs are surprisingly difficult to design, implement, and debug. This paper presents the first static analysis technique specifically for quality assurance of cryptographic PRNG implementations. The analysis targets a particular kind of implementation defect, the entropy loss. Entropy loss occurs when the entropy contained in the PRNG seed is not utilized to the full extent for generating the pseudo-random output stream. The Debian OpenSSL disaster, probably the most prominent PRNG-related security incident, was one but not the only manifestation of such a defect. Together with the static analysis technique, we present its implementation, a tool named Entroposcope. The tool offers a high degree of automation and practicality. We have applied the tool to five real-world PRNGs of different designs and show that it effectively detects both known and previously unknown instances of entropy loss.
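
The defect class is easy to exhibit dynamically on a toy PRNG: if seed bits are dropped before they reach the output function, distinct seeds collide and output entropy falls below seed entropy. A hedged demonstration (Entroposcope itself finds this statically in real implementations):

```python
# Demonstrates the entropy-loss defect class on a toy PRNG: a mixing step
# that truncates the seed makes distinct seeds collide, so the output
# stream carries less entropy than the seed. This dynamic check is only
# an illustration; Entroposcope finds such defects by static analysis.

import hashlib

def buggy_first_block(seed: int) -> bytes:
    state = seed & 0xFF                 # BUG: only 8 of 32 seed bits survive
    return hashlib.sha256(bytes([state])).digest()[:8]

def correct_first_block(seed: int) -> bytes:
    state = seed.to_bytes(4, "big")     # all seed bits reach the hash
    return hashlib.sha256(state).digest()[:8]

seeds = range(4096)                     # 12 bits of seed entropy
print(len({buggy_first_block(s) for s in seeds}))    # 256: seeds collide
print(len({correct_first_block(s) for s in seeds}))  # 4096: entropy preserved
```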

2017-05-19
Rheinländer, Astrid, Lehmann, Mario, Kunkel, Anja, Meier, Jörg, Leser, Ulf.  2016.  Potential and Pitfalls of Domain-Specific Information Extraction at Web Scale. Proceedings of the 2016 International Conference on Management of Data. :759–771.

In many domains, a plethora of textual information is available on the web as news reports, blog posts, community portals, etc. Information extraction (IE) is the default technique to turn unstructured text into structured fact databases, but systematically applying IE techniques to web input requires highly complex systems, starting from focused crawlers, over quality assurance methods to cope with the HTML input, to long pipelines of natural language processing and IE algorithms. Although a number of tools exist for each of these steps, their seamless, flexible, and scalable combination into a web-scale end-to-end text analytics system is still a true challenge. In this paper, we report our experiences from building such a system for comparing the "web view" on health-related topics with that derived from a controlled scientific corpus, i.e., Medline. The system combines a focused crawler, applying shallow text analysis and classification to maintain focus, with a sophisticated text analytic engine inside the Big Data processing system Stratosphere. We describe a practical approach to seed generation which let us crawl a corpus of ~1 TB of web pages highly enriched for the biomedical domain. Pages were run through a complex pipeline of best-of-breed tools for a multitude of necessary tasks, such as HTML repair, boilerplate detection, sentence detection, linguistic annotation, parsing, and eventually named entity recognition for several types of entities. Results are compared with those from running the same pipeline (without the web-related tasks) on a corpus of 24 million scientific abstracts and a third corpus made of ~250K scientific full texts. We evaluate the scalability, quality, and robustness of the employed methods and tools. The focus of this paper is to provide a large, real-life use case to inspire future research into robust, easy-to-use, and scalable methods for domain-specific IE at web scale.
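
The focused-crawling idea can be sketched as a frontier whose expansion is gated by a shallow relevance test; the keyword scorer below stands in for the paper's trained classifier, and fetching is stubbed so the sketch stays offline:

```python
# Sketch of focused crawling: a fetched page stays in the corpus, and its
# links are followed, only if a shallow text classifier judges it on-topic.
# The keyword scorer and the tiny fake web are illustrative assumptions.

from collections import deque

DOMAIN_TERMS = {"protein", "disease", "clinical", "gene", "drug"}

def on_topic(text: str, min_hits: int = 2) -> bool:
    """Shallow relevance test standing in for a real classifier."""
    return len(set(text.lower().split()) & DOMAIN_TERMS) >= min_hits

def crawl(seeds, fetch, max_pages=100):
    """Breadth-first crawl that only expands links of on-topic pages."""
    frontier, seen, corpus = deque(seeds), set(seeds), []
    while frontier and len(corpus) < max_pages:
        url = frontier.popleft()
        text, links = fetch(url)
        if not on_topic(text):
            continue                      # keep the crawl focused
        corpus.append(url)
        for link in links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return corpus

# Offline stand-in for HTTP fetching: a tiny fake web.
FAKE_WEB = {
    "seed1": ("gene and drug interactions in clinical trials", ["a", "b"]),
    "a":     ("celebrity gossip and sports scores", ["c"]),
    "b":     ("protein folding and disease pathways", []),
    "c":     ("unreachable via the off-topic page", []),
}
print(crawl(["seed1"], lambda u: FAKE_WEB.get(u, ("", []))))  # ['seed1', 'b']
```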