Biblio
How can high-level directives concerning risk, cybersecurity and compliance be operationalized in the central nervous system of any organization above a certain complexity? How can the effectiveness of technological solutions for security be proven and measured, and how can this technology be aligned with governance and financial goals at the board level? These are essential questions for any CEO, CIO or CISO who is concerned with the wellbeing of the firm. The concept of Zero Trust (ZT) approaches information and cybersecurity from the perspective of the asset to be protected, and of the value that asset represents. Zero Trust has been around for quite some time. Most professionals associate Zero Trust with a particular architectural approach to cybersecurity, involving concepts such as segments, resources that are accessed in a secure manner and the maxim “always verify, never trust”. This paper describes the current state of the art in Zero Trust usage. We investigate the limitations of current approaches and how they are addressed in the form of Critical Success Factors in the Zero Trust Framework developed by ON2IT ‘Zero Trust Innovators’ (1). Furthermore, this paper describes the design and engineering of a Zero Trust artefact that addresses the problems at hand (2), according to Design Science Research (DSR). The last part of this paper outlines the setup of an empirical validation through practitioner-oriented research, in order to gain broader acceptance and implementation of Zero Trust strategies (3). The final result is a proposed framework and associated technology that, via Zero Trust principles, address multiple layers of the organization in order to grasp and align cybersecurity risks and to understand the readiness and fitness of the organization and its measures to counter cybersecurity risks.
In cyberspace, a digital signature is a mathematical technique that plays a significant role, especially in validating the authenticity of digital messages, emails, or documents. The digital signature mechanism allows the recipient to trust that the received message came from the claimed sender and that the message was not altered in transit. Moreover, a digital signature provides a solution to the problems of tampering and impersonation in digital communications. In real-life terms, it is the equivalent of a handwritten signature or stamped seal, but it offers more security. This paper proposes a scheme that enables users to digitally sign their communications by validating their identity through their mobile devices. This is done by utilizing the user's ambient Wi-Fi-enabled devices. The proposed scheme depends on something the user possesses (i.e., Wi-Fi-enabled devices) and something in the user's environment (i.e., ambient Wi-Fi access points) where the validation process is implemented, in a way that requires no effort from users and removes the "weak link" from the validation process. The proposed scheme was examined experimentally.
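To ground the terminology above, the following minimal sketch shows the basic sign-and-verify flow that any digital signature scheme ultimately relies on, here using Ed25519 from Python's cryptography package; it does not model the paper's Wi-Fi-based identity validation, which is the actual contribution.

```python
# Minimal sign/verify sketch (illustrative only; not the paper's Wi-Fi-based scheme).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signer holds a private key; the verifier only needs the matching public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Quarterly report v2 - final"
signature = private_key.sign(message)

# Verification succeeds only if the message is unaltered and was signed by the
# holder of the matching private key (authenticity + integrity).
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Any modification in transit invalidates the signature.
try:
    public_key.verify(signature, message + b"!")
    print("signature valid")
except InvalidSignature:
    print("tampered message rejected")
```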
In light of recent advances in genetic-algorithm-driven automated program modification, our team has been actively exploring the art, engineering, and discovery of novel semantics-preserving transforms. While modern compilers represent some of the best ideas we have for automated program modification, current approaches represent only a small subset of the types of transforms that can be achieved. In the wilderness of post-apocalyptic software ecosystems of genetically-modified and mutant programs, there exists a broad array of potentially useful software mutations, including semantics-preserving transforms that may play an important role in future software design, development, and most importantly, evolution.
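As a concrete, deliberately trivial illustration of what a semantics-preserving transform looks like, the sketch below shows two Python implementations of the same function together with a behavioral spot-check; it is illustrative only and not taken from the paper.

```python
# Toy illustration of a semantics-preserving transform (not from the paper):
# the loop-based original and the transformed variant compute the same function.

def sum_of_squares(xs):
    # Original: explicit accumulation loop.
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_transformed(xs):
    # Transformed: generator expression + built-in sum; different structure,
    # identical observable behavior for any input sequence.
    return sum(x * x for x in xs)

# Spot-check behavioral equivalence on a few inputs (a real tool would rely on
# formal reasoning or far more extensive testing).
for sample in ([], [1, 2, 3], [-4, 0, 7]):
    assert sum_of_squares(sample) == sum_of_squares_transformed(sample)
print("transform preserved observable behavior on the sampled inputs")
```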
The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimen(s) that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners.
The premise of the SafeConfig'16 Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimen(s) that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification, and potentially wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners. The workshop will convene a panel of experts to explore this concept from three different perspectives. The first perspective is that of the practitioner: we will explore whether active and resilient technologies are deployed or planned for deployment, and whether the verification methodology affects that decision. The second perspective is that of the research community: we will address the shortcomings of current approaches and the research directions needed to address the practitioner's concerns. The third perspective is that of the policy community: specifically, we will explore the dynamics between technology, verification, and policy.
This paper presents verification and model-based checking of the Trivial File Transfer Protocol (TFTP). Model checking is a technique for software verification that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We aim to create a design model of the TFTP protocol, extended with a window size, in Promela, simulate it, and validate specified properties using SPIN. The verification was performed with the model checker SPIN, which accepts design specifications written in the verification language Promela. The results show that the TFTP model is free of livelocks.
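The sketch below illustrates the core idea behind explicit-state model checking, exhaustively enumerating reachable states and flagging stuck non-final states, on a toy stop-and-wait transfer loosely inspired by TFTP. It is written in Python for self-containment and is not the paper's Promela/SPIN model; it checks deadlock freedom as a simple stand-in for the livelock property verified in the paper.

```python
# Simplified, Python-only illustration of explicit-state model checking for a
# stop-and-wait transfer loosely modeled on TFTP (window size 1). Not the
# paper's Promela/SPIN model; it only shows exhaustive state enumeration.
from collections import deque

N_BLOCKS = 3  # assumed toy transfer size

# State: (next_block_to_send, data_in_flight, ack_in_flight, receiver_expects)
INITIAL = (1, None, None, 1)

def is_final(state):
    block, data, ack, _ = state
    return block > N_BLOCKS and data is None and ack is None

def successors(state):
    block, data, ack, expects = state
    nxt = []
    # Sender transmits the current block when nothing is in flight.
    if block <= N_BLOCKS and data is None and ack is None:
        nxt.append((block, block, ack, expects))
    # Receiver consumes the DATA message and emits an ACK.
    if data is not None:
        new_expects = expects + 1 if data == expects else expects
        nxt.append((block, None, data, new_expects))
    # Sender consumes the ACK and advances to the next block.
    if ack is not None:
        new_block = block + 1 if ack == block else block
        nxt.append((new_block, data, None, expects))
    return nxt

# Breadth-first exploration of the full reachable state space.
seen = {INITIAL}
queue = deque([INITIAL])
deadlocks = []
while queue:
    state = queue.popleft()
    succ = successors(state)
    if not succ and not is_final(state):
        deadlocks.append(state)  # stuck in a non-final state
    for s in succ:
        if s not in seen:
            seen.add(s)
            queue.append(s)

print(f"explored {len(seen)} states; deadlocks found: {deadlocks}")
```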
With the advent of the World Wide Web, information sharing over the internet has increased drastically, making web application security today's most significant battlefield between attackers and the resources of web services, and it is likely to remain so for the foreseeable future. A review of recent attacks shows that major attacks on web applications have been carried out even when the systems had strong network-level security. Poor input validation mechanisms in web applications lead to the deployment of vulnerable applications that are easy to exploit at later stages. Critical web application vulnerabilities such as Cross-Site Scripting (XSS) and injections (SQL, PHP, LDAP, SSL, XML, command, and code) occur because of inadequate base-level validation, which can be enough to modify the system in an unauthorized way or to exploit it. In this paper we present these issues in data validation strategies, with the aim of avoiding the deployment of vulnerable web applications.
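For context on the validation issues described above, the following sketch (not from the paper) contrasts a string-concatenated SQL query with a parameterized one, and shows output escaping against XSS, using only Python's standard library.

```python
# Illustrative sketch (not from the paper) of poor vs. safer input handling.
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # a typical SQL injection payload

# Vulnerable pattern: untrusted input concatenated directly into the query,
# so the payload is interpreted as SQL and the filter is bypassed.
injectable = f"SELECT role FROM users WHERE name = '{user_input}'"
print("concatenated query returns:", conn.execute(injectable).fetchall())

# Safer pattern: parameterized query, so the input is treated as data, not SQL.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print("parameterized query returns:", safe.fetchall())

# XSS: escape untrusted input before echoing it back into an HTML page.
comment = '<script>alert("xss")</script>'
print("escaped output:", html.escape(comment))
```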