Biblio
WireGuard is a free and open source Virtual Private Network (VPN) that aims to replace IPsec and OpenVPN. It is based on a new cryptographic protocol derived from the Noise Protocol Framework. This paper presents the first mechanised cryptographic proof of the protocol underlying WireGuard, using the CryptoVerif proof assistant. We analyse the entire WireGuard protocol as it is, including transport data messages, in an ACCE-style model. We contribute proofs for correctness, message secrecy, forward secrecy, mutual authentication, session uniqueness, and resistance against key compromise impersonation, identity mis-binding, and replay attacks. We also discuss the strength of the identity hiding provided by WireGuard. Our work also provides novel theoretical contributions that are reusable beyond WireGuard. First, we extend CryptoVerif to account for the absence of public key validation in popular Diffie-Hellman groups like Curve25519, which is used in many modern protocols including WireGuard. To our knowledge, this is the first mechanised cryptographic proof for any protocol employing such a precise model. Second, we prove several indifferentiability lemmas that are useful to simplify the proofs for sequences of key derivations.
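As an illustration of the "sequences of key derivations" the last contribution refers to, the following minimal Python sketch shows a Noise/WireGuard-style chained HMAC-based derivation; it is purely illustrative (the label and placeholder DH values are assumptions) and is not the CryptoVerif model used in the paper.

```python
# Illustrative sketch only (not the paper's CryptoVerif model): a Noise/WireGuard-style
# chained key derivation where each new secret is mixed into a running chaining key.
import hmac, hashlib

def kdf2(chaining_key: bytes, input_material: bytes):
    """HKDF-like extract-and-expand with BLAKE2s; returns (new chaining key, output key)."""
    prk = hmac.new(chaining_key, input_material, hashlib.blake2s).digest()  # extract
    t1 = hmac.new(prk, b"\x01", hashlib.blake2s).digest()                   # expand, block 1
    t2 = hmac.new(prk, t1 + b"\x02", hashlib.blake2s).digest()              # expand, block 2
    return t1, t2

# A sequence of derivations, as in a handshake that mixes several DH results in turn.
ck = hashlib.blake2s(b"example construction label").digest()
for dh_result in (b"dh1-placeholder", b"dh2-placeholder", b"dh3-placeholder"):
    ck, k = kdf2(ck, dh_result)   # k would key the next encryption step; ck carries the history
```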
Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice and more challenging for formal protocol verification to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model in which the intruder can consume at most a specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and specify service resource usage during protocol execution. In contrast to existing resource-conscious protocol verification models, our model allows a finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as Slow, Asymmetric, and Amplification DoS attacks, which exhaust different types of resources of the target, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in an automated fashion using Rewriting Modulo SMT.
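To make the intruder model concrete, the sketch below shows the kind of bookkeeping such a resource-bounded intruder implies: actions are allowed only if their cumulative cost within the current time window stays under a budget. The class and parameter names are hypothetical, and this is not the Maude/SMT formalisation used in the paper.

```python
# Minimal sketch (not the paper's Maude/SMT model): bookkeeping for a resource-bounded
# intruder that may spend at most `budget` resource units in any window of `window` ticks.
from collections import deque

class BoundedIntruder:
    def __init__(self, budget: int, window: int):
        self.budget, self.window = budget, window
        self.events = deque()  # (timestamp, cost) of past intruder actions

    def can_act(self, now: int, cost: int) -> bool:
        # Drop actions that have fallen out of the current time window.
        while self.events and self.events[0][0] <= now - self.window:
            self.events.popleft()
        return sum(c for _, c in self.events) + cost <= self.budget

    def act(self, now: int, cost: int) -> bool:
        if self.can_act(now, cost):
            self.events.append((now, cost))
            return True
        return False

intruder = BoundedIntruder(budget=10, window=5)
assert intruder.act(now=0, cost=6)
assert not intruder.act(now=1, cost=6)   # would exceed the per-window budget
assert intruder.act(now=6, cost=6)       # earlier spending has expired
```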
This paper presents a case study on the use and implementation of the Qualified Digital Signature. Issues such as the degree of use, the security and authenticity of the Qualified Digital Signature, and the publication and dissemination of digitally signed documents are analysed. To support the case study, a methodology was adopted that included interviews with municipalities belonging to the Intermunicipal Community of the region of Leiria, and a computer application was developed to determine which of the documents available on the municipalities' institutional websites were digitally signed. The results show that institutional websites already provide documentation with a Qualified Digital Signature and that the level of trust and authenticity regarding its use is considered mostly very positive.
Blockchains are emerging technologies that propose new business models and value propositions. Besides their application for cryptocurrency purposes, as distributed ledgers of transactions they enable new ways to provision trusted information in a distributed fashion. In this paper, we present our product tagging solution designed to help Small & Medium Enterprises (SMEs) protect their brands against counterfeit products and parallel markets, as well as to enhance the User Experience (UX) and promote the brand and product. Our solution combines the use of Distributed Ledger Technology (DLT) to assure, in a verifiable and permanent way, the trustworthiness and confidentiality of the information associated with the goods, and the innovative Ciphertext-Policy Attribute-Based Encryption (CP-ABE) technique to differentiate access to the product's information.
The roll-out of smart meters (SMs) in the electric grid has enabled data-driven grid management and planning techniques. SM data can be used together with short-term load forecasts (STLFs) to overcome polling frequency constraints for better grid management. However, the use of SMs that report consumption data at high spatial and temporal resolution entails consumer privacy risks, motivating work on protecting consumer privacy. The impact of privacy protection schemes on STLF accuracy is not well studied, especially for smaller aggregations of consumers, whose load profiles are subject to more volatility and are thus harder to predict. In this paper, we analyse the impact of two user demand shaping privacy protection schemes, model-distribution predictive control (MDPC) and load-levelling, on STLF accuracy. Support vector regression is used to predict the load profiles at different consumer aggregation levels. Results indicate that, while the MDPC algorithm marginally affects forecast accuracy for smaller consumer aggregations, this effect diminishes at higher aggregation levels. More importantly, the load-levelling scheme significantly improves STLF accuracy as it smooths out the grid-visible consumer load profile.
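A minimal sketch of the forecasting step described above, using scikit-learn's support vector regression on lagged consumption values of a synthetic half-hourly load profile; the features, kernel, and parameters are assumptions, not the paper's experimental setup.

```python
# Minimal sketch, not the paper's experimental setup: forecasting the next half-hour of
# aggregated smart-meter load with support vector regression on lagged consumption values.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
t = np.arange(2000)
load = 5 + 2 * np.sin(2 * np.pi * t / 48) + 0.3 * rng.standard_normal(t.size)  # synthetic daily cycle

lags = 48  # one day of half-hourly readings as features
X = np.stack([load[i:i + lags] for i in range(len(load) - lags)])
y = load[lags:]

split = int(0.8 * len(X))
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE on held-out data: {mape:.2f}%")
```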
Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires the architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge upon composition of the microservices. Related approaches to prevent such leaks do not take into account characteristics of cyber-physical systems such as real-time behavior or message-passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check whether the real-time message passing of microservices is secure. In combination, our contributions prevent information leaks from emerging on composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
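The sketch below illustrates, under assumed service names and a toy rule, why composition can leak: a refinement is ill-formed if the transitive closure of the micro-level allowed flows contains a flow the macro-level policy forbids. It is not the paper's actual rule set or verification technique.

```python
# Illustrative check only (not the paper's well-formedness rules): a micro-level policy
# refinement must not permit any information flow the macro-level policy forbids.
MACRO_FORBIDDEN = {("sensor_service", "cloud_export")}        # macro-level: no flow sensor -> export

MICRO_ALLOWED = {                                             # per-microservice allowed message flows
    ("sensor_service", "controller"),
    ("controller", "actuator"),
    ("controller", "cloud_export"),
}

def transitive_flows(edges):
    """All source->sink pairs reachable through chains of allowed message passing."""
    flows = set(edges)
    changed = True
    while changed:
        new = {(a, d) for (a, b) in flows for (c, d) in flows if b == c}
        changed = not new <= flows
        flows |= new
    return flows

leaks = transitive_flows(MICRO_ALLOWED) & MACRO_FORBIDDEN
print("ill-formed refinement, leaking flows:", leaks)   # sensor -> controller -> cloud_export
```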
The Internet of Things (IoT) market is growing rapidly, allowing the continuous evolution of new technologies. Alongside this development, most IoT devices are easy to compromise, as security is often not a prioritized characteristic. This paper proposes a novel IoT Security Model (IoTSM) that can be used by organizations to formulate and implement a strategy for developing end-to-end IoT security. IoTSM is grounded in the Software Assurance Maturity Model (SAMM) framework; however, it extends it with new security practices and empirical data gathered from IoT practitioners. Moreover, we generalize the model into a conceptual framework. This approach allows the formal analysis of security in general and the evaluation of an organization's security practices. Overall, our proposed approach can help researchers, practitioners, and IoT organizations discuss IoT security from an end-to-end perspective.
Many countries around the world have realized the benefits of e-government platforms in people's daily lives, and accordingly have already made partial implementations of the key e-government processes. However, before all potential services can be fully implemented, governments require the deployment of effective information security measures to ensure the secrecy and privacy of their citizens. In this paper, a robust watermarking algorithm is proposed to provide copyright protection for e-government document images. The proposed algorithm utilizes two transforms: the Discrete Wavelet Transform (DWT) and the Singular Value Decomposition (SVD). Experimental results demonstrate that the proposed e-government document image watermarking algorithm performs considerably well compared to existing relevant algorithms.
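The following is a generic sketch of the common DWT+SVD embedding idea, in which the watermark perturbs the singular values of the host image's LL wavelet subband; the subband choice, wavelet, and scaling factor are assumptions and may differ from the proposed algorithm.

```python
# Generic DWT+SVD embedding sketch (not necessarily the paper's exact algorithm):
# the watermark perturbs the singular values of the host image's LL wavelet subband.
import numpy as np
import pywt

def embed(host: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark.astype(float), full_matrices=False)
    S_marked = S + alpha * Sw[: S.size]          # additive embedding in the singular values
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.default_rng(1).integers(0, 256, (256, 256))
mark = np.random.default_rng(2).integers(0, 2, (128, 128)) * 255
watermarked = embed(host, mark)
```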
Currently, organisations find it difficult to design a Decision Support System (DSS) that can predict various operational risks, such as financial and quality issues, even though operational risks are responsible for significant economic losses and damage to an organisation's reputation in the market. This paper proposes a new DSS for risk assessment, called the Fuzzy Inference DSS (FIDSS) mechanism, which uses fuzzy inference methods based on an organisation's big data collection. It includes the Emerging Association Patterns (EAP) technique, which identifies the important features of each risk event. Then, the Mamdani fuzzy inference technique and several membership functions are evaluated using the firm's data sources. The FIDSS mechanism can enhance an organisation's decision-making processes by quantifying the severity of a risk as low, medium or high. When it automatically predicts a medium or high severity level, it assists organisations in taking further actions to reduce that level.
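The sketch below illustrates the Mamdani inference step in isolation: a crisp risk indicator is fuzzified against triangular membership functions, rules fire with min implication and max aggregation, and centroid defuzzification yields a severity score. The universes, membership functions, and rules are hypothetical, not the FIDSS configuration.

```python
# Illustrative Mamdani-style inference (universes, membership functions and rules are
# hypothetical): map a risk indicator in [0, 10] to a crisp severity score in [0, 10].
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

severity = np.linspace(0, 10, 1001)
sev_low, sev_med, sev_high = tri(severity, -5, 0, 5), tri(severity, 2.5, 5, 7.5), tri(severity, 5, 10, 15)

def infer(indicator: float) -> float:
    # Fuzzify the input against low / medium / high indicator sets.
    w_low, w_med, w_high = tri(indicator, -5, 0, 5), tri(indicator, 2.5, 5, 7.5), tri(indicator, 5, 10, 15)
    # Rules: low -> low severity, medium -> medium, high -> high (min implication, max aggregation).
    aggregated = np.maximum.reduce([np.minimum(w_low, sev_low),
                                    np.minimum(w_med, sev_med),
                                    np.minimum(w_high, sev_high)])
    # Centroid defuzzification yields the crisp severity.
    return float(np.sum(aggregated * severity) / (np.sum(aggregated) + 1e-9))

print(infer(8.2))  # a high risk indicator defuzzifies to a high severity score
```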
Most searchable attribute-based encryption schemes support only single-keyword search without attribute revocation, so the data user cannot quickly verify the validity of the ciphertext returned by the cloud service provider. Therefore, this paper proposes an authorized searchable CP-ABE scheme with attribute revocation and applies it to the cloud computing environment. The data user sends authorization information to an authorization server, which assists the data user in effectively verifying the ciphertext returned by the cloud service provider, while the scheme supports revocation of a user attribute in a fine-grained access control structure without updating keys during the revocation stage. Under the computational Diffie-Hellman assumption in the random oracle model, the scheme is proved to satisfy indistinguishability of ciphertexts and search trapdoors. Finally, the performance analysis shows that the scheme has higher computational efficiency.
In this paper, we present a combinatorial testing methodology for testing web applications with regard to SQL injection vulnerabilities. We describe three attack grammars that were developed and used to generate concrete attack vectors. Furthermore, we present and evaluate two different oracles used to observe the application's behavior when subjected to such attack vectors. We also present a prototype tool called SQLInjector capable of automated SQL injection vulnerability testing of web applications. The developed methodology can be applied to any web application that uses server-side scripting and HTML for handling user input and has a SQL database backend. Our approach relies on the use of a database proxy, making this a gray-box testing method. We establish the effectiveness of the proposed tool with the WAVSEP verification framework and conduct a case study on real-world web applications, where we are able to discover both known vulnerabilities and additional previously undiscovered flaws.
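As an illustration of grammar-driven attack vector generation, the toy grammar below is exhaustively expanded into concrete SQL injection strings; it is a simplified stand-in, not one of the three attack grammars developed in the paper.

```python
# Sketch of grammar-based attack vector generation (the grammar here is a toy example,
# not one of the paper's three attack grammars).
import itertools

GRAMMAR = {
    "<attack>":  [["<quote>", "<boolean>", "<comment>"]],
    "<quote>":   [["'"], ["\")"]],
    "<boolean>": [[" OR 1=1"], [" OR 'a'='a'"]],
    "<comment>": [["--"], ["#"], ["/*"]],
}

def expand(symbol):
    """Enumerate every terminal string derivable from a grammar symbol."""
    if symbol not in GRAMMAR:             # terminal
        return [symbol]
    results = []
    for production in GRAMMAR[symbol]:
        parts = [expand(s) for s in production]
        results += ["".join(p) for p in itertools.product(*parts)]
    return results

vectors = expand("<attack>")
print(len(vectors), vectors[:3])   # 2 * 2 * 3 = 12 concrete attack vectors
```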
The software development life cycle (SDLC) starts with business and functional specifications signed with a client. In addition, the specifications also capture policy, procedural, contractual, regulatory, legislative, and standards compliance requirements for the given client industry. The SDLC must adhere to service level agreements (SLAs) while remaining compliant across development activities, processes, tools, frameworks, and the reuse of open-source software components. In today's world, global software development happens across geographically distributed (autonomous) teams consuming extraordinary amounts of open-source components drawn from a variety of disparate sources. Although this helps organizations deal with technical and economic challenges, it also increases unintended risks; for example, use of software with a non-compliant license might lead to copyright issues and litigation, and use of a library with vulnerabilities poses security risks. Mitigating such risks and taking remedial measures is a challenge due to the lack of visibility and transparency of activities across these distributed teams, as they mostly operate in silos. We believe a unified model that non-invasively monitors and analyzes the activities of distributed teams will go a long way toward building software that adheres to the various compliance requirements. In this paper, we propose CAG, a decentralized Compliance Adherence and Governance framework using blockchain technologies. Our framework (i) enables the capturing of required data points based on compliance specifications, (ii) analyzes events for non-conformant behavior through smart contracts, (iii) provides real-time alerts, and (iv) records and maintains an immutable audit trail of the various activities.
Mobile military networks are uniquely challenging to build and maintain because of their wireless nature and the unfriendliness of the environment, resulting in unreliable and capacity-limited performance. Currently, most tactical networks implement TCP/IP, which was designed for fairly stable, infrastructure-based environments and requires sophisticated and often application-specific extensions to address the challenges of the communication scenario. Information Centric Networking (ICN) is a clean-slate networking approach that does not depend on stable connections to retrieve information and naturally supports node mobility and delay/disruption-tolerant communications; as a result, it is particularly interesting for tactical applications. However, although ICN seems to offer some structural benefits for tactical environments over TCP/IP, a number of challenges, including naming, security, and performance tuning, still need to be addressed for practical adoption. This document, prepared within NATO IST-161 RTG, evaluates the effectiveness of Named Data Networking (NDN), the de facto standard implementation of ICN, in the context of tactical edge networks and its potential for adoption.
Trusted collaboration satisfying the requirements of (a) adequate transparency and (b) preservation of privacy of business-sensitive information is a key factor in ensuring the success and adoption of online business-to-business (B2B) collaboration platforms. Our work proposes novel ways of stringing together game-theoretic modeling, blockchain technology, and cryptographic techniques to build such a platform for B2B collaboration involving enterprise buyers and sellers who may be strategic. The B2B platform builds upon three ideas. The first is to use a permissioned blockchain with smart contracts as the technical infrastructure for building the platform. Second, these smart contracts implement deep business logic, derived through a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers, to devise strategies that induce honest behavior from buyers and sellers. Third, we present a formal framework that captures the essential requirements for secure and private B2B collaboration, and, in this direction, we develop cryptographic regulation protocols that, in conjunction with the blockchain, help implement such a framework. We believe our work is an important first step towards building a platform that enables B2B collaboration among strategic and competitive agents while maximizing social welfare and addressing the privacy concerns of the agents.
A growing amount of research on IoT and its implications for security, privacy, the economy, and society has been carried out to inform policies and design. However, ordinary people, who are the citizens and users of these emerging technologies, have rarely been involved in the processes that inform these policies, governance mechanisms, and design, due to institutionalised processes that prioritise objective knowledge over subjective knowledge. People's subjective experiences are often discarded. This priority is likely to further widen the gap between people, technology policies, and design as technologies advance towards delegated human agency, which decreases human interfaces in technology-mediated relationships with objects, systems, services, trade, and other (often) unknown third-party beneficiaries. Such a disconnection can have serious implications for policy implementation, especially when it involves human limitations. To address this disconnection, we argue that a space needs to be constructed for people to meaningfully contribute their subjective knowledge, that is, their experience, to the complex technology policies that, in turn, shape their experience and well-being. To this end, our paper contributes the design and pilot implementation of a method to reconnect and involve people in IoT security policymaking and development.
Java is a safe programming language: it provides bytecode verification and enforces memory protection. For instance, programmers cannot directly access memory but have to use object references. Yet, the Java runtime provides an Unsafe API as a backdoor for developers to access low-level system code. Whereas the Unsafe API is designed to be used by the Java core library, a growing community of third-party libraries use it to achieve high performance. The Unsafe API is powerful but dangerous; improper use leads to data corruption, resource leaks, and difficult-to-diagnose JVM crashes. In this work, we study Unsafe crash patterns and propose a memory checker to enforce memory safety at the bytecode level, thus avoiding JVM crashes caused by misuse of the Unsafe API. We evaluate our technique on real crash cases from the OpenJDK bug system and real-world applications from AJDK. Our tool reduces the effort required by developers to diagnose Unsafe-related crashes from several days to a few minutes. We also evaluate the runtime overhead of our tool on projects using intensive Unsafe operations, and the results show that our tool causes negligible perturbation to the execution of the applications.
Much recent work focuses on finding bugs and security vulnerabilities in smart contracts written in existing languages. Although this approach may be helpful, it does not address flaws in the underlying programming language, which can facilitate writing buggy code in the first place. We advocate a re-thinking of the blockchain software engineering tool set, starting with the programming language in which smart contracts are written. In this paper, we propose and justify requirements for a new generation of blockchain software development tools. New tools should (1) consider users' needs as a primary concern; (2) seek to facilitate safe development by detecting relevant classes of serious bugs at compile time; and (3) be as blockchain-agnostic as possible, given the wide variety of blockchain platforms available, while leveraging the properties that are common among blockchain environments to improve safety and developer effectiveness.
Information, not just data, is key to today's global challenges. Solving these challenges requires not only advancing geospatial and big data analytics but also new analysis and decision-making environments that enable reliable decisions from trustable, understandable information and that go beyond current approaches to machine learning and artificial intelligence. These environments are successful when they effectively couple human decision making with advanced, guided spatial analytics in human-computer collaborative discourse and decision making (HCCD). Our HCCD approach builds upon visual analytics, natural scale templates, traceable information, human-guided analytics, and explainable and interactive machine learning, focusing on empowering the decision maker through interactive visual spatial analytic environments where non-digital human expertise and experience can be combined with state-of-the-art and transparent analytical techniques. When we combine this approach with real-world application-driven research, not only does the pace of scientific innovation accelerate, but impactful change also occurs. I'll describe how we have applied these techniques to challenges in sustainability, security, resiliency, public safety, and disaster management.
This paper presents a machine learning classifier designed to identify SQL injection vulnerabilities in PHP code. Both classical and deep learning based machine learning algorithms were used to train and evaluate classifier models, using input validation and sanitization features extracted from source code files. In ten-fold cross-validation, a model trained using a Convolutional Neural Network (CNN) achieved the highest precision (95.4%), while a model based on a Multilayer Perceptron (MLP) achieved the highest recall (63.7%) and the highest F-measure (0.746).
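A minimal sketch of this kind of pipeline with scikit-learn, using placeholder features and labels rather than the validation and sanitization features extracted from the paper's PHP corpus.

```python
# Minimal sketch of the kind of pipeline described (features and data are placeholders,
# not those extracted from the paper's PHP corpus).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
# Hypothetical per-file features, e.g. counts of sanitization calls, concatenated query strings, ...
X = rng.integers(0, 5, size=(500, 8)).astype(float)
y = (X[:, 0] + X[:, 1] > X[:, 2] + 3).astype(int)   # placeholder "vulnerable" label

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
scores = cross_validate(clf, X, y, cv=10, scoring=("precision", "recall", "f1"))
print({k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})
```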
The increasing diffusion of malware endowed with steganographic techniques requires carefully identifying and evaluating a new set of threats. The creation of a covert channel to hide a communication within network traffic is one of the most relevant, as it can be used to exfiltrate information or orchestrate attacks. Even if network steganography is becoming a well-studied topic, only a few works focus on IPv6 and consider real network scenarios. Therefore, this paper investigates IPv6 covert channels deployed in the wild. It also presents a performance evaluation of six different data hiding techniques for IPv6, including their ability to bypass some intrusion detection systems. Lastly, ideas to detect IPv6 covert channels are presented.
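One frequently cited IPv6 hiding technique embeds covert data in the 20-bit Flow Label field; the Scapy sketch below illustrates the idea with placeholder addresses and payload, and is not necessarily one of the six techniques evaluated in the paper.

```python
# Sketch of one well-known IPv6 hiding technique: encoding covert bytes in the 20-bit
# Flow Label field (addresses and payload are placeholders). Requires scapy.
from scapy.all import IPv6, UDP, Raw, send

def covert_packets(secret: bytes, dst="2001:db8::1"):
    """Split the secret into 16-bit chunks and carry each one in a packet's Flow Label."""
    for i in range(0, len(secret), 2):
        chunk = int.from_bytes(secret[i:i + 2].ljust(2, b"\x00"), "big")
        yield IPv6(dst=dst, fl=chunk) / UDP(dport=53) / Raw(b"innocuous payload")

pkts = list(covert_packets(b"exfil"))
# send(pkts)  # sending requires raw-socket privileges; inspect pkts[0].fl instead
print(hex(pkts[0].fl))
```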
Platooning is a cooperative driving application in which a set of vehicles collaboratively sense each other for driving safety and traffic efficiency. However, platooning without security assurance makes the cooperating vehicles vulnerable to cyber-attacks, which may cause life-threatening accidents. In this paper, we introduce malicious attacks on platoon maneuvers. To defend against these attacks, we propose a Ciphertext-Policy Attribute-Based Encryption (CP-ABE) based Platoon Secure Sensing scheme, named CPSS. In the CPSS, the platoon key is encapsulated in the access control structure during the key distribution process, so that interference messages sent by attackers without the platoon key can be ignored. Therefore, the sensing data, which contain speed and position information, can be protected. In this way, speed and distance fluctuations caused by attacks can be mitigated or even eliminated, thereby avoiding collisions and ensuring overall platoon stability. Time complexity analysis shows that the CPSS is more efficient than existing polynomial-time solutions. Finally, to evaluate the capabilities of the CPSS, we integrate LTE-V2X with platoon maneuvers based on the Veins platform. The evaluation results show that the CPSS outperforms the baseline algorithm by 25% in terms of distance variations.
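The sketch below models only the access-control idea at a functional level, with no real CP-ABE cryptography: the platoon key is released to vehicles whose attributes satisfy an assumed policy, and sensing messages that do not authenticate under that key are ignored.

```python
# Functional sketch of the access-control idea only (no real CP-ABE cryptography):
# the platoon key is released to vehicles whose attributes satisfy the policy, and
# sensing messages that do not authenticate under that key are simply ignored.
import hmac, hashlib, secrets

PLATOON_KEY = secrets.token_bytes(32)
POLICY = {"role:platoon_member", "lane:convoy_A"}           # AND of required attributes

def distribute_key(attributes: set):
    return PLATOON_KEY if POLICY <= attributes else None    # policy satisfied -> key released

def tag(key: bytes, sensing: bytes) -> bytes:
    return hmac.new(key, sensing, hashlib.sha256).digest()

def accept(sensing: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(mac, tag(PLATOON_KEY, sensing))

member_key = distribute_key({"role:platoon_member", "lane:convoy_A", "oem:acme"})
attacker_key = distribute_key({"role:bystander"})            # None: attributes do not satisfy policy

msg = b"speed=23.4;gap=12.1"
assert accept(msg, tag(member_key, msg))
assert attacker_key is None                                  # interference cannot be authenticated
```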