Bibliography
The Zero Trust Model makes each node responsible for approving a transaction before it is committed. Data owners can track their data as it is shared among the various data custodians, ensuring data security. The consensus algorithm lets users trust the network because a malicious node cannot obtain approval from all nodes, which causes its transaction to be aborted. The use case chosen to demonstrate the proposed consensus algorithm is a college placement system. The algorithm is extended to implement a diversified, decentralized, automated placement system in which the data owner, i.e. the student, maintains an immutable certificate vault and the student's data is validated by a verifier network, i.e. the academic and placement departments. Data transfers from the student to companies are recorded as transactions in the distributed ledger, or blockchain, allowing the student to track the data.
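A minimal sketch of the unanimous-approval rule described above, assuming illustrative node names and a placeholder validation check rather than the authors' actual implementation: a transaction is committed to the ledger only if every verifier node approves it, and is aborted otherwise.

    # Toy illustration of the unanimous-approval consensus rule; node names and
    # the validation predicate are assumptions made for this example.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str

        def approve(self, tx: dict) -> bool:
            # A real verifier (e.g. the academic or placement department) would
            # check signatures and certificate contents; here we check a flag.
            return tx.get("certificate_valid", False)

    @dataclass
    class Ledger:
        nodes: list
        blocks: list = field(default_factory=list)

        def submit(self, tx: dict) -> bool:
            # Zero trust: the transaction commits only if *all* nodes approve it.
            if all(node.approve(tx) for node in self.nodes):
                self.blocks.append(tx)
                return True
            return False  # a single rejection aborts the transaction

    ledger = Ledger(nodes=[Node("academic_dept"), Node("placement_dept")])
    print(ledger.submit({"owner": "student-42", "certificate_valid": True}))   # True
    print(ledger.submit({"owner": "student-99", "certificate_valid": False}))  # False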
Research on the design of data center infrastructure is increasing, in both academia and industry, due to the rapid development of cloud-based applications such as search engines, social networks, and large-scale computing. Large data centers can consist of hundreds to thousands of servers and must deliver high performance with low downtime. As the application and service infrastructure grows to meet the needs of a dynamic data center, the network topology must be designed to guarantee availability and security. One way to achieve this is by implementing the zero trust security model based on micro-segmentation. Zero trust is a security concept based on the principle of "never trust, always verify": there are no trusted and untrusted zones, and all network traffic is treated as untrusted. Micro-segmentation is a way to achieve zero trust by dividing a network into smaller logical segments to restrict traffic. In this research, the performance of a software-defined data center network applying the zero trust security model through micro-segmentation is evaluated on a testbed simulation of Cisco Application Centric Infrastructure, measuring round-trip time, jitter, and packet loss during the experiments. The results show that micro-segmentation adds an average of 4 μs of round-trip time and 11 μs of jitter with no packet loss, so security can be improved without significantly affecting data center network performance.
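The default-deny behaviour that micro-segmentation enforces can be pictured with a short sketch; the segment names and contracts below are illustrative, loosely modelled on the endpoint-group/contract idea rather than actual ACI configuration.

    # Minimal sketch of default-deny micro-segmentation: traffic between
    # segments is dropped unless an explicit contract allows it.
    ALLOWED_CONTRACTS = {
        ("web-segment", "app-segment", 8080),
        ("app-segment", "db-segment", 5432),
    }

    def permit(src_segment: str, dst_segment: str, dst_port: int) -> bool:
        # Zero trust: "never trust, always verify" -- there is no implicit allow.
        return (src_segment, dst_segment, dst_port) in ALLOWED_CONTRACTS

    print(permit("web-segment", "app-segment", 8080))  # True: a contract exists
    print(permit("web-segment", "db-segment", 5432))   # False: denied by default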
Location-Based Services (LBSs) provide services based on a user's current or past location. During the past decade, LBSs have become more popular as a result of the widespread use of mobile devices with positioning functions. Location information is secondary information that can provide personal insight into one's life, which raises privacy issues when such data is shared with cloud-based services. For example, a hospital is a public space and its actual location does not carry any sensitive information; however, the location may become sensitive if the specialty of the hospital is analyzed. The design proposed in this paper combines methods for providing data privacy protection for LBSs that use a cloud service. The work is built on zero trust, and access to the system is managed at different levels. The proposal is based on a model that stores user location data in supplementary servers rather than in untrusted third-party applications. The approach of the present research is to analyze the privacy protection possibilities offered by data partitioning. Data collected from the different resources is distributed across different servers according to a partitioning model based on a multi-level policy. Third-party applications are granted access only to designated servers, and the privacy of the user profile is also ensured within each server, since the servers are not trusted.
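The partitioning idea can be sketched as follows; the sensitivity levels, clearance values and record fields are assumptions for illustration, not the paper's actual model.

    # Location records are split across separate servers by sensitivity level,
    # and a third-party application is only granted access to the servers its
    # clearance designates.
    SERVERS = {1: [], 2: [], 3: []}   # one store per sensitivity level

    def store(record: dict) -> None:
        SERVERS[record["level"]].append(record)

    def query(app_clearance: int, level: int) -> list:
        # Access is granted only to designated servers, never above clearance.
        if level > app_clearance:
            raise PermissionError("server not designated for this application")
        return SERVERS[level]

    store({"user": "u1", "place": "city centre", "level": 1})
    store({"user": "u1", "place": "oncology clinic", "level": 3})
    print(query(app_clearance=1, level=1))   # only coarse data is returned
    # query(app_clearance=1, level=3)        # raises PermissionError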
In this paper, we aim at automated coverage-based unit testing for embedded software. To achieve this goal, building on an analysis of industrial requirements and our previous work on the automated unit testing tool CAUT, we build a new tool, SmartUnit, to address the engineering requirements of our partner companies. SmartUnit is a dynamic symbolic execution implementation that supports statement, branch, boundary value, and MC/DC coverage. SmartUnit has been used to test more than one million lines of code in real projects. For confidentiality reasons, we select three in-house real projects for the empirical evaluation. We also carry out evaluations on two open source database projects, SQLite and PostgreSQL, to test the scalability of our tool, since embedded software projects are mostly not large, averaging 5K-50K lines of code. From our experimental results, in general, more than 90% of functions in commercial embedded software achieve 100% statement, branch, and MC/DC coverage, more than 80% of functions in SQLite achieve 100% MC/DC coverage, and more than 60% of functions in PostgreSQL achieve 100% MC/DC coverage. Moreover, SmartUnit is able to find runtime exceptions at the unit testing level; we have reported exceptions such as array index out of bounds and division by zero in SQLite. Furthermore, we analyze the reasons for low coverage in automated unit testing in our setting and survey the state of manual unit testing with respect to automated unit testing in industry.
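As a rough illustration of coverage-based unit testing, the sketch below brute-forces a small input range instead of performing dynamic symbolic execution as SmartUnit does; the unit under test and the input range are invented for the example.

    # Enumerate inputs, record which branches were exercised, and report
    # runtime exceptions (e.g. division by zero) found at the unit level.
    covered, exceptions = set(), []

    def unit_under_test(x: int) -> int:
        if x >= 0:                       # branch 1
            covered.add("x>=0")
            return 100 // x              # raises ZeroDivisionError when x == 0
        covered.add("x<0")               # branch 2
        return -x

    for value in range(-5, 6):           # naive input generation
        try:
            unit_under_test(value)
        except ZeroDivisionError as exc:
            exceptions.append((value, exc))

    print(covered)     # {'x>=0', 'x<0'} -> both branches covered
    print(exceptions)  # [(0, ZeroDivisionError('integer division or modulo by zero'))]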
Modern personal computers have embraced increasingly powerful Graphics Processing Units (GPUs). Recently, GPU-based graphics acceleration in web apps (i.e., applications running inside a web browser) has become popular. WebGL is the main effort to provide OpenGL-like graphics for web apps and it is currently used in 53% of the top-100 websites. Unfortunately, WebGL has posed serious security concerns as several attack vectors have been demonstrated through WebGL. Web browsers' solutions to these attacks have been reactive: discovered vulnerabilities have been patched and new runtime security checks have been added. Unfortunately, this approach leaves the system vulnerable to zero-day vulnerability exploits, especially given the large size of the Trusted Computing Base of the graphics plane. We present Sugar, a novel operating system solution that enhances the security of GPU acceleration for web apps by design. The key idea behind Sugar is using a dedicated virtual graphics plane for a web app by leveraging modern GPU virtualization solutions. A virtual graphics plane consists of a dedicated virtual GPU (or vGPU) as well as all the software graphics stack (including the device driver). Sugar enhances the system security since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, we show that Sugar achieves high performance. Moreover, unlike current systems, Sugar is able to use two underlying physical GPUs, when available, to co-render the User Interface (UI): one GPU is used to provide virtual graphics planes for web apps and the other to provide the primary graphics plane for the rest of the system. Such a design not only provides strong security guarantees, it also provides enhanced performance isolation.
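The allocation policy behind Sugar's design can be pictured with a small sketch; the class and names below only model the idea of giving each web app its own virtual graphics plane on a GPU separate from the primary one, not the actual virtualization mechanism.

    # Conceptual sketch: each web app receives a dedicated virtual graphics
    # plane (vGPU plus graphics stack) from a GPU that is separate from the
    # one driving the primary system UI.
    from dataclasses import dataclass, field
    from itertools import count

    @dataclass
    class PhysicalGPU:
        name: str
        _ids: count = field(default_factory=count)

        def new_virtual_plane(self, app: str) -> str:
            # Dedicated vGPU and driver instance; nothing is shared across apps.
            return f"{self.name}/vgpu{next(self._ids)} for {app}"

    web_gpu, ui_gpu = PhysicalGPU("gpu0"), PhysicalGPU("gpu1")
    print(web_gpu.new_virtual_plane("maps.example"))   # gpu0/vgpu0 for maps.example
    print(web_gpu.new_virtual_plane("game.example"))   # gpu0/vgpu1 for game.example
    print(f"primary graphics plane stays on {ui_gpu.name}")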
Query authentication has been extensively studied to ensure the integrity of query results for outsourced databases, which are often not fully trusted. However, access control, another important security concern, is largely ignored by existing works. Notably, recent breakthroughs in cryptography have enabled fine-grained access control over outsourced data. In this paper, we take the first step toward studying the problem of authenticating relational queries with fine-grained access control. The key challenge is how to protect information confidentiality during query authentication, which is essential to many critical applications. To address this challenge, we propose a novel access-policy-preserving (APP) signature as the primitive authenticated data structure. A useful property of the APP signature is that it can be used to derive customized signatures for unauthorized users to prove inaccessibility while achieving zero-knowledge confidentiality. We also propose a grid-index-based tree structure that can aggregate APP signatures for efficient range and join query authentication. In addition, a number of optimization techniques are proposed to further improve the authentication performance. Security analysis and performance evaluation show that the proposed solutions and techniques are robust and efficient under various system settings.
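As a rough picture of how an authenticated data structure supports query verification, the sketch below uses a plain Merkle hash tree as a generic stand-in; the paper's APP signatures and grid-index-based tree are more elaborate and additionally preserve the access policy.

    # Generic query-authentication sketch with a Merkle hash tree (stand-in).
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    rows = [b"row1", b"row2", b"row3", b"row4"]
    root = merkle_root(rows)                   # published/signed by the data owner
    print(root.hex())
    # The server returns query results plus the hashes needed to recompute the
    # root; the client accepts the results only if the recomputed root matches.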
Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the juxtaposition of individual privacy and public trust, by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are being used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization in the spirit of Gennaro et al. (Eurocrypt'13) to the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.
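For intuition about the LWE encodings mentioned above, the toy sketch below encodes and decodes a single bit under the learning-with-errors assumption; the parameters are far too small to be secure and this is not the paper's actual encoding scheme.

    # Toy LWE encoding of one bit: b = <a, s> + e + bit * q/2 (mod q).
    import numpy as np

    rng = np.random.default_rng(0)
    q, n = 3329, 16                      # toy modulus and dimension
    s = rng.integers(0, q, size=n)       # secret key

    def encode(bit: int):
        a = rng.integers(0, q, size=n)
        e = int(rng.integers(-3, 4))     # small noise term
        b = (int(a @ s) + e + bit * (q // 2)) % q
        return a, b

    def decode(a, b) -> int:
        centred = (b - int(a @ s)) % q
        return int(q // 4 < centred < 3 * q // 4)   # close to q/2 -> bit was 1

    a, b = encode(1)
    print(decode(a, b))   # 1
    a, b = encode(0)
    print(decode(a, b))   # 0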
Nuclear weapons have re-emerged as one of the main global security challenges of our time. Any further reductions in the nuclear arsenals will have to rely on robust verification mechanisms. This requires, in particular, trusted measurement systems to confirm the authenticity of nuclear warheads based on their radiation signatures. These signatures are considered extremely sensitive information, and inspection systems have to be designed to protect them. To accomplish this task, so-called "information barriers" have been proposed. These devices process sensitive information acquired during an inspection, but only display results in a pass/fail manner. Traditional inspection systems rely on complex electronics both for data acquisition and processing. Several research efforts have produced prototype systems, but after almost thirty years of research and development, no viable and widely accepted system has emerged. This talk highlights recent efforts to overcome this impasse. A first approach is to avoid electronics in critical parts of the measurement process altogether and to rely instead on physical phenomena to detect radiation and to confirm a unique fingerprint of the inspected warhead using a zero-knowledge protocol. A second approach is based on a radiation detection system using vintage electronics built around a 6502 processor. Hardware designed in the distant past, at a time when its use for sensitive measurements was never envisioned, may drastically reduce concerns that another party implemented backdoors or hidden switches. Sensitive information is only stored on traditional punched cards. The talk concludes with a roadmap and highlights opportunities for researchers from the hardware security community to make critical contributions to nuclear arms control and global security in the years ahead.
With the rapid development of big data technology, the required data processing capacity and efficiency have caused a number of legacy security technologies to fail, especially in the data security domain. Data security risks have therefore become a critical concern for big data usage. We introduce a novel method for big data security control that comprises three steps: user context recognition based on zero trust, fine-grained data access authentication control, and data access auditing based on full network traffic, in order to recognize and intercept risky data access in a big data environment. Experiments applying this fine-grained, zero-trust-based big data security method to a drug-related information analysis system demonstrate that it can identify the majority of data security risks.
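A minimal sketch of the three-step control described above, with invented attribute names and thresholds: (1) recognise the user context, (2) make a fine-grained access decision, and (3) audit every access.

    # Zero trust: every request is scored and audited, regardless of network
    # location; field names and thresholds are assumptions for this example.
    import datetime

    AUDIT_LOG = []

    def context_score(ctx: dict) -> float:
        score = 1.0
        if not ctx.get("device_registered"):
            score -= 0.5
        if ctx.get("geo") not in {"office", "vpn"}:
            score -= 0.3
        return score

    def access(user: str, field: str, ctx: dict) -> bool:
        allowed = context_score(ctx) >= 0.7 and field in ctx.get("granted_fields", ())
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        AUDIT_LOG.append((stamp, user, field, allowed))   # step 3: audit trail
        return allowed

    ctx = {"device_registered": True, "geo": "office", "granted_fields": ("dosage",)}
    print(access("analyst-1", "dosage", ctx))        # True
    print(access("analyst-1", "patient_name", ctx))  # False, and both are audited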
The Internet of Things (IoT) is scaling at an exponential rate, and this growth introduces new challenges for the management of IoT networks. The question that emerges is how we can trust the constrained infrastructure that is shortly expected to be formed by millions of 'things.' The answer is not to trust it. This research introduces Amatista, a blockchain-based middleware for management in IoT. Amatista presents a novel zero-trust hierarchical mining process that allows validating the infrastructure and transactions at different levels of trust. This research evaluates Amatista on Edison Arduino Boards.
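The notion of validating at different levels of trust can be sketched with a toy proof-of-work whose difficulty varies per level; this is only an illustration, not Amatista's actual hierarchical mining protocol.

    # Per-level difficulty: constrained edge nodes use a cheap check, nodes
    # higher in the hierarchy a stricter one.
    import hashlib

    def mine(data: str, difficulty: int) -> int:
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    print(mine("sensor-7:temp=21.5", difficulty=2))  # cheap validation at the edge
    print(mine("sensor-7:temp=21.5", difficulty=4))  # stricter validation higher up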
The evolution of the enterprise computing landscape towards emerging trends such as fog/edge computing and the Industrial Internet of Things (IIoT) is leading to a change of approach to securing computer networks, to deal with challenges such as mobility, virtualized infrastructures, dynamic and heterogeneous user contexts, and transaction-based interactions. Such dynamicity introduces greater uncertainty into the access control process and motivates the need for risk-based access control decision making. Thus, the traditional perimeter-based security paradigm is increasingly being abandoned in favour of so-called "zero trust networking" (ZTN). In ZTN, networks are partitioned into zones, with different levels of trust required to access a zone's resources depending on the assets the zone protects. All access to sensitive information is subject to rigorous access control based on user and device profile and context. In this paper we outline a policy enforcement framework that addresses many of the open challenges of risk-based access control for ZTN. We specify the design of the required policy languages, including a generic firewall policy language to express firewall rules. We design a mechanism to map these rules to specific firewall syntax and to install the rules on the firewall. We show the viability of our design with a small proof-of-concept.
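The rule-mapping step can be pictured with a short sketch; the rule fields and the iptables-style output are assumptions for illustration, not the paper's policy language.

    # Generic firewall rule representation mapped to one concrete syntax.
    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        action: str        # "ACCEPT" or "DROP"
        protocol: str      # "tcp" or "udp"
        src_cidr: str
        dst_port: int

    def to_iptables(rule: FirewallRule) -> str:
        return (f"iptables -A FORWARD -p {rule.protocol} -s {rule.src_cidr} "
                f"--dport {rule.dst_port} -j {rule.action}")

    zone_rule = FirewallRule("ACCEPT", "tcp", "10.10.1.0/24", 443)
    print(to_iptables(zone_rule))
    # iptables -A FORWARD -p tcp -s 10.10.1.0/24 --dport 443 -j ACCEPT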
Recently, attribute-based access control (ABAC) has emerged as a convenient paradigm for specifying, enforcing and maintaining rich and flexible authorization policies, leveraging attributes originating from multiple sources, e.g., operating systems, software modules, remote services, etc. However, attackers may try to bypass ABAC policies by compromising such sources to forge the attributes they provide, e.g., by deliberately manipulating the data contained within those attributes at will, in an effort to gain unintended access to sensitive resources. In such a context, a proper risk assessment of ABAC policies, taking into account their enlisted attributes as well as the corresponding sources, becomes highly convenient for mitigating zero-day security incidents or vulnerabilities before attackers can exploit them. With this in mind, we introduce RiskPol, an automated risk assessment framework for ABAC policies based on dynamically combining previously assigned trust scores for each attribute source, such that overall scores at the policy level can be obtained and used as a reference for performing a risk assessment of each policy. In this paper, we detail the general intuition behind our approach, its current status, and our plans for future work.
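The scoring idea can be sketched as follows; the trust values, the policy-to-source mapping and the weakest-link combination rule are assumptions for illustration, not RiskPol's actual model.

    # Each attribute source carries a trust score; a policy's score is derived
    # from the sources it depends on (here via the minimum, i.e. weakest link).
    SOURCE_TRUST = {"os": 0.9, "hr_service": 0.8, "byod_agent": 0.4}

    POLICY_SOURCES = {
        "read_payroll": ["hr_service", "os"],
        "open_door":    ["byod_agent"],
    }

    def policy_trust(policy: str) -> float:
        return min(SOURCE_TRUST[src] for src in POLICY_SOURCES[policy])

    for name in POLICY_SOURCES:
        score = policy_trust(name)
        risk = "high" if score < 0.5 else "acceptable"
        print(f"{name}: trust={score:.1f} -> {risk} risk")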