Biblio
E-Governance is rising rapidly in various parts of the world, and with the rising digitization of resources, threats to infrastructure and digital data within government departments are rising as well. In developed nations, security parameters and optimization processes are well established, but in developing nations such as India, security has yet to be addressed rigorously. This study proposes a framework for security assessment across E-Governance departments based on Information System principles. The major areas covered are hardware, network, software, server, and data security; physical environment security; and organizational policies for securing Information Systems.
Authorization policy authoring has required tools from the start. With access policy governance now an executive-level responsibility, it is imperative that such a tool expose the policy to business users, with little or no IT intervention, as natural language. NIST SP 800-162 [1] first prescribes natural language policies (NLPs) as the preferred expression of policy and then implicitly calls for automated translation of NLP to machine-executable code. This paper therefore proposes an interoperable model for the NLP's human expression. It furthermore documents the research and development of a tool set for end-to-end authoring and translation. This R&D journey, focused constantly on end users, has debunked certain myths, responded to steadily increasing market sophistication, applied formal disciplines (e.g., ontologies, grammars, and compiler design), and motivated an informal demonstration of autonomic code generation. The lessons learned should be of practical value to the entire ABAC community. The research in progress (increasingly complex policies, proactive rule analytics, and expanded NLP authoring-language support) will require collaboration with an ever-expanding technical community from industry and academia.
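As a rough illustration of the NLP-to-code translation this abstract describes, consider a minimal grammar-based translator; the controlled sentence pattern, attribute names, and dict output below are invented for illustration and are not the paper's actual tool:

```python
import re

# Hypothetical controlled-English pattern: "Allow <subject> to <action> <resource>."
RULE_PATTERN = re.compile(
    r"Allow (?P<subject>[\w ]+) to (?P<action>\w+) (?P<resource>[\w ]+)\.",
    re.IGNORECASE,
)

def translate(nlp_sentence: str) -> dict:
    """Translate one controlled natural-language policy sentence into an
    ABAC-style rule (here a plain dict; a real tool would emit e.g. XACML)."""
    match = RULE_PATTERN.fullmatch(nlp_sentence.strip())
    if match is None:
        raise ValueError(f"Sentence not in the controlled grammar: {nlp_sentence!r}")
    return {
        "effect": "Permit",
        "subject.role": match["subject"].strip().lower(),
        "action": match["action"].lower(),
        "resource.type": match["resource"].strip().lower(),
    }

print(translate("Allow claims adjusters to read customer records."))
# {'effect': 'Permit', 'subject.role': 'claims adjusters', 'action': 'read', ...}
```

A production pipeline would sit a real grammar and ontology behind this step, but the shape is the same: parse the controlled NLP, then emit machine-executable rules.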
This paper presents the authors' preliminary framework for drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that data governance and collaborative governance are the main drivers of a smart governance model. These pillars are supported by a legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, toward the implementation of evidence-based policies.
We address the problem of ciphertext-policy attribute-based encryption with fine-grained access control, a cryptographic primitive with many concrete application scenarios such as Pay-TV, e-Health, and Cloud Storage. In this context we improve on previous LSSS-based techniques by building on the work of Hohenberger and Waters at PKC'13 and proposing a construction that achieves ciphertext size linear in the minimum of the size of the boolean access formula and the number of its clauses. Our construction also supports fast decryption. We also propose two interesting extensions: the first aims at reducing storage and computation on the user side and is useful in the context of lightweight devices or devices using a cloud operator; the second uses multiple authorities to mitigate key escrow by the authority.
The cloud computing paradigm enables enterprises to realise significant cost savings whilst boosting their agility and productivity. However, security and privacy concerns generally deter enterprises from migrating their critical data to the cloud. One way to alleviate these concerns, and hence bolster the adoption of cloud computing, is to devise adequate security policies that control the manner in which these data are stored and accessed in the cloud. Nevertheless, for enterprises to trust these policies, a framework capable of providing assurances about their correctness is required. This work proposes such a framework. In particular, it proposes an approach that enables enterprises to define their own view of what constitutes a correct policy through the formulation of an appropriate set of well-formedness constraints. These constraints are expressed ontologically, thus enabling, by virtue of semantic inferencing, automated reasoning about their satisfaction by the policies.
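A sketch of how an ontologically expressed well-formedness constraint might be checked, using rdflib; the `ex:` vocabulary and the constraint itself are invented for illustration, not taken from the paper's framework:

```python
from rdflib import Graph

# A toy policy ontology: one rule with a subject role and a resource.
policy_ttl = """
@prefix ex: <http://example.org/policy#> .
ex:rule1 a ex:Rule ;
    ex:hasSubjectRole ex:Doctor ;
    ex:hasResource ex:MedicalRecord .
"""

g = Graph()
g.parse(data=policy_ttl, format="turtle")

# Well-formedness constraint: every Rule must name at least one subject role.
# The ASK query returns True if some rule VIOLATES the constraint.
violation_query = """
PREFIX ex: <http://example.org/policy#>
ASK {
    ?rule a ex:Rule .
    FILTER NOT EXISTS { ?rule ex:hasSubjectRole ?role }
}
"""

violated = bool(g.query(violation_query).askAnswer)
print("constraint violated:", violated)  # False: rule1 is well-formed
```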
Some IoT data are time-sensitive and cannot be processed in clouds, which are too far away from IoT devices. Fog computing, located as close as possible to data sources at the edge of IoT systems, deals with this problem. Some IoT data are sensitive and require privacy controls. The proposed Policy Enforcement Fog Module (PEFM), running within a single fog, operates close to data sources connected to their fog, and enforces privacy policies for all sensitive IoT data generated by these data sources. PEFM distinguishes two kinds of fog data processing. First, fog nodes process data for local IoT applications, running within the local fog. All real-time data processing must be local to satisfy real-time constraints. Second, fog nodes disseminate data to nodes beyond the local fog (including remote fogs and clouds) for remote (and non-real-time) IoT applications. PEFM has two components for these two kinds of fog data processing. First, the Local Policy Enforcement Module (LPEM) performs direct privacy policy enforcement for sensitive data accessed by local IoT applications. Second, the Remote Policy Enforcement Module (RPEM) sets up a mechanism for indirectly enforcing privacy policies for sensitive data sent to remote IoT applications. RPEM is based on creating and disseminating Active Data Bundles: software constructs that inseparably bundle sensitive data, their privacy policies, and an execution engine able to enforce those policies. To demonstrate the effectiveness and efficiency of the solution, we developed a proof-of-concept scenario for a smart home IoT application. We investigate privacy threats for sensitive IoT data and show a framework for using PEFM to overcome these threats.
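A minimal structural sketch of the two enforcement paths the abstract distinguishes; the class and method names are ours, not PEFM's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Optional

Policy = Callable[[str, dict], bool]  # (requesting_app, data) -> allowed?

@dataclass
class LocalPolicyEnforcement:
    """LPEM-style direct enforcement for local, real-time applications."""
    policy: Policy

    def read(self, app: str, data: dict) -> Optional[dict]:
        return data if self.policy(app, data) else None

@dataclass
class ActiveDataBundle:
    """RPEM-style bundle: data travels with its policy and enforcement logic."""
    data: dict
    policy: Policy

    def access(self, app: str) -> Optional[dict]:
        # The bundle itself enforces the policy at the remote node.
        return self.data if self.policy(app, self.data) else None

# Example policy: only the local heating app may see raw thermostat readings.
def allow_heating_only(app: str, data: dict) -> bool:
    return app == "heating-controller"

reading = {"sensor": "thermostat-1", "celsius": 21.5}
lpem = LocalPolicyEnforcement(policy=allow_heating_only)
print(lpem.read("heating-controller", reading))   # served locally in real time
bundle = ActiveDataBundle(data=reading, policy=allow_heating_only)
print(bundle.access("cloud-analytics"))           # None: denied at the remote node
```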
Many organizations process personal information in the course of normal operations. Improper disclosure of this information can be damaging, so organizations must obey privacy laws and regulations that restrict its release, or risk penalties. Since electronic management of personal information must be held in strict compliance with the law, software systems designed for such purposes must have some guarantee of compliance. To support this, we develop a general methodology for designing and implementing verifiable information systems. This paper develops the design of the History Aware Programming Language into a framework for creating systems that can be mechanically checked against privacy specifications. We apply this framework to create and verify a prototypical Electronic Medical Record System (EMRS) expressed as a set of actor components and first-order linear temporal logic specifications in assume-guarantee form. We then show that the implementation of the EMRS provably enforces a formalized Health Insurance Portability and Accountability Act (HIPAA) policy using a combination of model checking and static analysis techniques.
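As an illustration of the style of specification involved, here is a simplified HIPAA-style disclosure rule in linear temporal logic, written in assume-guarantee form; this example formula is ours, not one taken from the paper:

```latex
% Assume-guarantee form: if the environment satisfies A, the system guarantees G.
A \rightarrow \Box\, \forall p, r.\;
  \bigl( \mathit{disclose}(p, r) \rightarrow \blacklozenge\, \mathit{consent}(p, r) \bigr)
% \Box = "always"; \blacklozenge = "at some point in the past":
% a record r of patient p may be disclosed only after p has consented.
```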
This paper argues that standard privacy policy principles are unsuitable for wearable devices, and introduces a proposal to test the role of digital literacy on privacy concerns and behaviors, in an effort to devise modified privacy policies that are appropriate for wearable devices.
Online privacy policies notify users of a website how their personal information is collected, processed and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. However, in practice, consumers seem to actually read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language websites. The results empirically confirm that on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular, to what extent practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choices and comparisons between readability studies, as well as calling for research toward a readability-measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
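For reference, the core of such a readability analysis can be sketched in a few lines. The constants below are the standard published definitions of FRES, FKG, and ARI; the syllable counter is a crude heuristic, whereas real toolsets use dictionaries:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    n_chars = sum(len(w) for w in words)
    return {
        # Flesch Reading Ease: higher = easier
        "FRES": 206.835 - 1.015 * n_words / sentences - 84.6 * n_syll / n_words,
        # Flesch-Kincaid Grade: US school grade level
        "FKG": 0.39 * n_words / sentences + 11.8 * n_syll / n_words - 15.59,
        # Automated Readability Index: character-based grade level
        "ARI": 4.71 * n_chars / n_words + 0.5 * n_words / sentences - 21.43,
    }

print(readability("We may share your personal data with affiliated "
                  "third parties for legitimate business purposes."))
```

The strong correlations the study reports are plausible from these formulas alone: FRES and FKG are both linear in the same two ratios (words per sentence and syllables per word), differing mainly in sign and scale.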
Nowadays, big data is considered one of the major technologies used to manage huge volumes of data, but privacy receives little consideration in big data platforms. Indeed, developers do not focus on implementing security best practices in their programs to protect personal and sensitive data, and organizations can face financial loss because of this noncompliance with applicable regulations. In this paper, we propose a solution for inserting privacy policies written in XACML (eXtensible Access Control Markup Language) into an access control solution for NoSQL databases. Our solution can be used for NoSQL data stores that include few access control features. It aims to ensure fine-grained access control with purpose as the main parameter; we focus on document-level access control and apply the approach to MongoDB, the most widely used NoSQL data store.
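A minimal sketch of purpose-aware, document-level access control in front of MongoDB, using pymongo against a local instance. The policy table, collection layout, and field names are ours for illustration; this is not the paper's XACML engine, whose Permit/Deny decisions such a wrapper would consume:

```python
from pymongo import MongoClient

# Toy policy table standing in for XACML decisions:
# (role, purpose) -> set of readable fields, or None meaning Deny.
POLICY = {
    ("physician", "treatment"): {"name", "diagnosis", "medications"},
    ("researcher", "research"): {"diagnosis"},  # no identifying fields
}

def find_with_purpose(collection, role, purpose, query):
    """Run a find() only if (role, purpose) is permitted, projecting
    away every field the policy does not grant."""
    allowed = POLICY.get((role, purpose))
    if allowed is None:
        raise PermissionError(f"Deny: {role!r} for purpose {purpose!r}")
    projection = {f: 1 for f in allowed} | {"_id": 0}
    return collection.find(query, projection)

# Requires a running MongoDB instance:
client = MongoClient("mongodb://localhost:27017")
records = client["emr"]["patient_records"]
for doc in find_with_purpose(records, "researcher", "research", {}):
    print(doc)  # only the 'diagnosis' field is returned
```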
Android privacy control is an important but difficult problem to solve. Previously, much research effort focused either on extending the Android permission model with better policies or on modifying the Android framework for fine-grained access control. In this work, we take an integral approach by designing and implementing SweetDroid, a calling-context-sensitive privacy policy enforcement framework. SweetDroid combines automated policy generation with automated policy enforcement. The automatically generated policies in SweetDroid are based on the calling contexts of privacy-sensitive APIs; hence, SweetDroid is able to tell whether a particular API (e.g., getLastKnownLocation) under a certain execution path is leaking private information. The policy enforcement in SweetDroid is also fine-grained: it operates at the individual API level, not at the permission level. We implement and evaluate the system based on thousands of Android apps, including those from a third-party market and malicious apps from VirusTotal. Our experiment results show that SweetDroid can successfully distinguish and enforce different privacy policies based on calling contexts, and the current design is both developer hassle-free and user transparent. SweetDroid is also efficient because it only introduces small storage and computational overhead.
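The calling-context idea can be sketched generically in Python (SweetDroid itself instruments Android's API layer; the decorator, policy table, and function names below are ours). The same API call succeeds or fails depending on who invokes it:

```python
import inspect

# (api_name, immediate_caller) -> allowed?  One execution path of an API
# may be permitted while another is denied.
CONTEXT_POLICY = {
    ("get_location", "show_nearby_stores"): True,
    ("get_location", "build_ad_profile"): False,
}

def context_sensitive(api):
    def wrapper(*args, **kwargs):
        caller = inspect.stack()[1].function  # who is calling this API?
        if not CONTEXT_POLICY.get((api.__name__, caller), False):
            raise PermissionError(f"{api.__name__} denied in context {caller!r}")
        return api(*args, **kwargs)
    return wrapper

@context_sensitive
def get_location():
    return (47.37, 8.54)  # stand-in for a privacy-sensitive API

def show_nearby_stores():
    return get_location()  # permitted context

def build_ad_profile():
    return get_location()  # denied context

print(show_nearby_stores())
try:
    build_ad_profile()
except PermissionError as e:
    print(e)
```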
Cloud computing is a new paradigm of computing that is attracting ever more computer users, government agencies, and businesses. Cloud technology brings many advantages, particularly ever-present services that anyone can access over the internet. With cloud computing, a company no longer requires its own physical servers or hardware to support its computer systems, networks, and internet services. One of the core services offered by cloud technology is storing data in remote storage. In the last few years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the significant issue between customer and cloud. Cloud computing serves many kinds of customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. For academic clients, security measures affect computing performance, so the cloud provider needs to find a way to combine performance and security; for other clients, high performance may not be as critical. In this paper, we design an efficient, secure, and verifiable protocol for outsourcing data. We develop an extended QP-problem protocol for storing and outsourcing data securely. To verify the correctness of the result returned by the cloud, we validate it using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
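For reference, for a convex quadratic program the KKT conditions used in such result verification take the following standard form (our generic notation; the paper's exact QP formulation may differ):

```latex
% Convex QP:   min_x  (1/2) x^T Q x + c^T x   subject to   A x <= b
% KKT conditions at an optimum x* with multipliers \lambda
% (necessary and sufficient when Q is positive semidefinite):
Q x^{*} + c + A^{\top}\lambda = 0                   % stationarity
A x^{*} \le b                                       % primal feasibility
\lambda \ge 0                                       % dual feasibility
\lambda_i \, (A x^{*} - b)_i = 0 \quad \forall i    % complementary slackness
```

Verification is then cheap for the client: plug the returned $x^{*}$ and $\lambda$ into these four checks, with no need to re-solve the QP.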
Wireless Sensor Networks (WSN) are widely used to monitor and control physical environments. An efficient energy management system is needed to deploy these networks in lossy environments while maintaining reliable communication. The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) is designed to manage energy properly without compromising reliability. This protocol has been implemented in Contiki OS, TinyOS, and OMNeT++ Castalia, but these implementations also simulate the full operating mechanics of a specific hardware model rather than the protocol alone, adding unnecessary overhead and slowing down RPL simulations. In light of this, we have built a working ns-3 implementation of RPL that supports multiple RPL instances and a global repair mechanism. The behavior and output of our simulator were compared against Cooja for verification, and the results are similar, with a minor difference in rank computation.
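For context on the rank computation mentioned above, here is a sketch of the rank increase under RPL's Objective Function Zero (OF0, RFC 6552), with the default constants from the RFCs; real implementations derive the step of rank from link quality:

```python
# Sketch of RPL Objective Function Zero (OF0, RFC 6552) rank computation.
MIN_HOP_RANK_INCREASE = 256   # RFC 6550 default
DEFAULT_RANK_FACTOR = 1       # Rf
DEFAULT_RANK_STRETCH = 0      # Sr
DEFAULT_STEP_OF_RANK = 3      # Sp, derived from link quality in practice

def of0_rank(parent_rank: int,
             step_of_rank: int = DEFAULT_STEP_OF_RANK,
             rank_factor: int = DEFAULT_RANK_FACTOR,
             rank_stretch: int = DEFAULT_RANK_STRETCH) -> int:
    """Rank of a node joining under a parent of rank parent_rank:
    R(N) = R(P) + (Rf * Sp + Sr) * MinHopRankIncrease."""
    rank_increase = (rank_factor * step_of_rank + rank_stretch) * MIN_HOP_RANK_INCREASE
    return parent_rank + rank_increase

root_rank = MIN_HOP_RANK_INCREASE   # the DODAG root's rank
print(of0_rank(root_rank))          # 256 + 3 * 256 = 1024
```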
Many popular web applications incorporate end-to-end secure messaging protocols, which seek to ensure that messages sent between users are kept confidential and authenticated, even if the web application's servers are broken into or otherwise compelled into releasing all their data. Protocols that promise such strong security guarantees should be held up to rigorous analysis, since protocol flaws and implementation bugs can easily lead to real-world attacks. We propose a novel methodology that allows protocol designers, implementers, and security analysts to collaboratively verify a protocol using automated tools. The protocol is implemented in ProScript, a new domain-specific language that is designed for writing cryptographic protocol code that can both be executed within JavaScript programs and automatically translated to a readable model in the applied pi calculus. This model can then be analyzed symbolically using ProVerif to find attacks in a variety of threat models. The model can also be used as the basis of a computational proof using CryptoVerif, which reduces the security of the protocol to standard cryptographic assumptions. If ProVerif finds an attack, or if the CryptoVerif proof reveals a weakness, the protocol designer modifies the ProScript protocol code and regenerates the model to enable a new analysis. We demonstrate our methodology by implementing and analyzing a variant of the popular Signal Protocol with only minor differences. We use ProVerif and CryptoVerif to find new and previously-known weaknesses in the protocol and suggest practical countermeasures. Our ProScript protocol code is incorporated within the current release of Cryptocat, a desktop secure messenger application written in JavaScript. Our results indicate that, with disciplined programming and some verification expertise, the systematic analysis of complex cryptographic web applications is now becoming practical.
Until recently, IT security received limited attention within the scope of Process Control Systems (PCS). In the past, PCS consisted of isolated, specialized components running closed process control applications, where hardware was placed in physically secured locations and connections to remote network infrastructures were forbidden. Nowadays, industrial communications are fully exploiting the plethora of features and novel capabilities deriving from the adoption of commodity off the shelf (COTS) hardware and software. Nonetheless, the reliance on COTS for remote monitoring, configuration and maintenance also exposed PCS to significant cyber threats. In light of these issues, this paper presents the steps for the design, verification and implementation of a lightweight remote attestation protocol. The protocol is aimed at providing a secure software integrity verification scheme that can be readily integrated into existing industrial applications. The main novelty of the designed protocol is that it encapsulates key elements for the protection of both participating parties (i.e., verifier and prover) against cyber attacks. The protocol is formally verified for correctness with the help of the Scyther model checking tool. The protocol implementation and experimental results are provided for a Phoenix-Contact industrial controller, which is widely used in the automation of gas transportation networks in Romania.
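The core challenge-response pattern of such a remote attestation protocol can be sketched as follows: a generic nonce-plus-HMAC scheme under a pre-shared key. The actual protocol adds protections for both verifier and prover that are not modeled here:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned out of band

def prover_attest(nonce: bytes, firmware: bytes) -> bytes:
    """Prover: bind a fresh verifier nonce to a measurement of its software."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()

def verifier_check(nonce: bytes, response: bytes, known_good: bytes) -> bool:
    """Verifier: recompute the expected response for the known-good image."""
    expected = hmac.new(SHARED_KEY, nonce + hashlib.sha256(known_good).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

firmware = b"PLC control application v1.2"
nonce = secrets.token_bytes(16)                  # freshness against replay
resp = prover_attest(nonce, firmware)
print(verifier_check(nonce, resp, firmware))     # True: integrity verified
print(verifier_check(nonce, resp, b"tampered"))  # False: modified software
```

The fresh nonce prevents an attacker from replaying an old valid response, which is exactly the kind of property a model checker like Scyther is used to confirm.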
We propose a real-time authentication scheme for smart grids which improves upon existing schemes. Our scheme is useful in many smart grid operations. The smart grid Control Center (CC) communicates with the sensor nodes installed along transmission lines so as to use real-time data on environmental conditions to determine optimum power transmission capacity. Likewise, a smart grid Operation Center (OC) communicates with several Residential Area (RA) gateways (GW), in turn connected to the smart meters installed in consumer premises, so as to dynamically control the power supply to meet demand based on real-time electricity-use information. It is necessary not only to authenticate sensor nodes and other smart devices, but also to protect the integrity of the messages being communicated. Our scheme is based on batch signatures and is more efficient than existing schemes. Furthermore, our scheme is based on a stronger notion of security, whereby a batch of signatures verifies only if all individual signatures are valid. The communication overhead is kept low by using short signatures for verification.
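For reference, one standard way to realize short, batch-verifiable signatures with this all-or-nothing property is BLS aggregation, shown below in our notation; the abstract does not specify the exact scheme used. With keys $pk_i = g^{sk_i}$ and signatures $\sigma_i = H(m_i)^{sk_i}$, the whole batch is checked with a single pairing equation, which fails whenever any individual signature is invalid:

```latex
\sigma_i = H(m_i)^{sk_i}, \qquad pk_i = g^{sk_i}
% Single batch check covering all n signatures:
e\!\Bigl(\prod_{i=1}^{n} \sigma_i,\; g\Bigr)
  \;=\; \prod_{i=1}^{n} e\bigl(H(m_i),\; pk_i\bigr)
```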
Deploying Internet of Things (IoT) applications over wireless networks has become commonplace. The transmission of unencrypted data between IoT devices gives malicious users the opportunity to steal personal information. Despite the resource constraints of the IoT environment, devices need authentication methods to encrypt information and control access rights. This paper introduces a trusted third-party method for identity verification and key exchange that minimizes the resources required for communication between devices. A device must be registered in order to obtain a certificate and a session key for identity verification and encrypted communication. Malicious users will not be able to obtain private information or misuse it, as it is protected by authentication and access control.
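The registration flow can be sketched with symmetric primitives: an HMAC-based credential issued by the trusted third party. Real deployments would likely use asymmetric certificates, and all names here are ours, not the paper's:

```python
import hashlib
import hmac
import secrets

TTP_KEY = secrets.token_bytes(32)  # known only to the trusted third party

def register(device_id: str) -> tuple[bytes, bytes]:
    """TTP: issue a session key and a MAC-based credential binding
    the device identity to that key."""
    session_key = secrets.token_bytes(32)
    credential = hmac.new(TTP_KEY, device_id.encode() + session_key,
                          hashlib.sha256).digest()
    return session_key, credential

def ttp_verify(device_id: str, session_key: bytes, credential: bytes) -> bool:
    """TTP-side check that a presented credential is genuine."""
    expected = hmac.new(TTP_KEY, device_id.encode() + session_key,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)

key, cred = register("sensor-42")
print(ttp_verify("sensor-42", key, cred))    # True: registered device
print(ttp_verify("intruder-1", key, cred))   # False: identity not bound
```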
We present Minesweeper, a tool to verify that a network satisfies a wide range of intended properties such as reachability or isolation among nodes, waypointing, black holes, bounded path length, load-balancing, functional equivalence of two routers, and fault-tolerance. Minesweeper translates network configuration files into a logical formula that captures the stable states to which the network forwarding will converge as a result of interactions between routing protocols such as OSPF, BGP and static routes. It then combines the formula with constraints that describe the intended property. If the combined formula is satisfiable, there exists a stable state of the network in which the property does not hold. Otherwise, no stable state (if any) violates the property. We used Minesweeper to check four properties of 152 real networks from a large cloud provider. We found 120 violations, some of which are potentially serious security vulnerabilities. We also evaluated Minesweeper on synthetic benchmarks, and found that it can verify rich properties for networks with hundreds of routers in under five minutes. This performance is due to a suite of model-slicing and hoisting optimizations that we developed, which reduce runtime by over 460x for large networks.
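The essence of this approach (encode the stable forwarding state together with the negated property, then ask a solver for a satisfying, i.e. violating, state) can be sketched with a toy reachability check using the z3 SMT solver. The three-node topology and variable names are invented here; the real tool derives its constraints from OSPF/BGP configuration files:

```python
from z3 import And, Bools, Not, Solver, sat

# fwd_XY: in the stable state, X forwards traffic for the destination to Y.
fwd_AB, fwd_BC, reach_A, reach_B, reach_C = Bools(
    "fwd_AB fwd_BC reach_A reach_B reach_C")

s = Solver()
s.add(reach_C)                          # C originates the destination
s.add(reach_B == And(fwd_BC, reach_C))  # B reaches it only via C
s.add(reach_A == And(fwd_AB, reach_B))  # A reaches it only via B
s.add(fwd_BC)                           # from B's (toy) configuration
# Intended property: A can reach the destination. Assert its NEGATION:
s.add(Not(reach_A))

if s.check() == sat:
    print("violating stable state:", s.model())  # e.g. fwd_AB = False
else:
    print("property holds in every stable state")
```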
We consider the problem of verifying the security of finitely many sessions of a protocol that tosses coins in addition to standard cryptographic primitives against a Dolev-Yao adversary. Two properties are investigated here: secrecy, which asks if no adversary interacting with a protocol P can determine a secret sec with probability $> 1 - p$; and indistinguishability, which asks if the probability of observing any sequence $\overline{o}$ in P1 is the same as that of observing $\overline{o}$ in P2, under the same adversary. Both secrecy and indistinguishability are known to be coNP-complete for non-randomized protocols. In contrast, we show that, for randomized protocols, secrecy and indistinguishability are both decidable in coNEXPTIME. We also prove a matching lower bound for the secrecy problem by reducing the non-satisfiability problem of monadic first-order logic without equality.
IoT nodes mostly work in a specific scenario and execute a fixed program. To make nodes suitable for more scenarios, this paper introduces a kind of IoT node that can change its program at any time and has intelligent, dynamically reconfigurable features. A transport protocol is then proposed that enables the node to work in different scenarios and execute the corresponding program. Finally, we design the node in Verilog and verify it on an FPGA. The results show that the protocol is feasible and offers a novel approach for the IoT.
Population protocols are a well established model of computation by anonymous, identical finite state agents. A protocol is well-specified if from every initial configuration, all fair executions of the protocol reach a common consensus. The central verification question for population protocols is the well-specification problem: deciding if a given protocol is well-specified. Esparza et al. have recently shown that this problem is decidable, but with very high complexity: it is at least as hard as the Petri net reachability problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive recursive complexity are currently known. In this paper we introduce the class WS3 of well-specified strongly-silent protocols and we prove that it is suitable for automatic verification. More precisely, we show that WS3 has the same computational power as general well-specified protocols, and captures standard protocols from the literature. Moreover, we show that the membership problem for WS3 reduces to solving boolean combinations of linear constraints over N. This allowed us to develop the first software able to automatically prove well-specification for all of the infinitely many possible inputs.
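To make the model concrete, here is a simulation sketch of a textbook well-specified protocol: four-state exact majority, in which fair executions converge to a consensus on whether A-agents initially outnumber B-agents (assuming a strict majority in the input). The transition rules follow the standard construction; the simulation harness is ours:

```python
import random

# Four-state exact-majority protocol: 'A'/'B' are active, 'a'/'b' passive.
def step(x, y):
    pair = {x, y}
    if pair == {"A", "B"}: return "a", "b"   # opposing actives cancel
    if pair == {"A", "b"}: return "A", "a"   # active converts opposing passive
    if pair == {"B", "a"}: return "B", "b"
    return x, y                              # no rule applies

def run(n_a, n_b, steps=100_000):
    """Random scheduler (fair in the limit) over n_a 'A' and n_b 'B' agents."""
    agents = ["A"] * n_a + ["B"] * n_b
    for _ in range(steps):
        i, j = random.sample(range(len(agents)), 2)
        agents[i], agents[j] = step(agents[i], agents[j])
    # Output map: 'A'/'a' mean "A had the majority", likewise 'B'/'b'.
    return {s.upper() for s in agents}

print(run(12, 8))  # {'A'}: consensus that A-agents outnumbered B-agents
```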
The proliferation of connected devices has motivated a surge in the development of cryptographic protocols to support a diversity of devices and use cases. To address this trend, we propose continuous verification, a methodology for secure cryptographic protocol design that consists of three principles: (1) repeated use of verification tools; (2) judicious use of common message components; and (3) inclusion of verifiable model specifications in standards. Our recommendations are derived from previous work in the formal methods community, as well as from our past experiences applying verification tools to improve standards. Through a case study of IETF protocols for the IoT, we illustrate the power of continuous verification by (i) discovering flaws in the protocols using the Cryptographic Protocol Shapes Analyzer (CPSA); (ii) identifying the corresponding fixes based on the feedback provided by CPSA; and (iii) demonstrating that verifiable models can be intuitive, concise and suitable for inclusion in standards to enable third-party verification and future modifications.
Educational games have a potentially significant role to play in the increasing efforts to expand access to computer science education. Computational thinking is an area of particular interest, including the development of problem-solving strategies like divide and conquer. Existing games designed to teach computational thinking generally consist of either open-ended exploration with little direct guidance or a linear series of puzzles with lots of direct guidance, but little exploration. Educational research indicates that the most effective approach may be a hybrid of these two structures. We present Dragon Architect, an educational computational thinking game, and use it as context for a discussion of key open problems in the design of games to teach computational thinking. These problems include how to directly teach computational thinking strategies, how to achieve a balance between exploration and direct guidance, and how to incorporate engaging social features. We also discuss several important design challenges we have encountered during the design of Dragon Architect. We contend the problems we describe are relevant to anyone making educational games or systems that need to teach complex concepts and skills.
System-level development has been dominated by traditional programming languages such as C and C++ for decades. These languages are inherently unsafe regarding memory management. Even experienced developers make mistakes that open up security holes or compromise the safety properties of software. The Rust programming language is targeted at the systems domain and aims to eliminate memory-related programming errors by enforcing a strict memory model at the language and compiler level. Unfortunately, these compile-time guarantees no longer hold when a Rust program is linked against a library written in unsafe C, which is commonly required for functionality where an implementation in Rust is not yet available. In this paper, we present Sandcrust, an easy-to-use sandboxing solution for isolating code and data of a C library in a separate process. This isolation protects the Rust-based main program from any memory corruption caused by bugs in the unsafe library, which would otherwise invalidate the memory safety guarantees of Rust. Sandcrust is based on the Rust macro system and requires no modification to the compiler or runtime, but only straightforward annotation of functions that call the library's API.
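The underlying pattern (run the untrusted code in a separate process and marshal arguments and results over IPC) can be illustrated generically in Python. Sandcrust itself realizes this via Rust macros around C library calls; the parser below is merely a stand-in for such an unsafe call:

```python
from multiprocessing import Pipe, Process

def untrusted_parse(data: bytes) -> int:
    """Stand-in for a call into an unsafe C library."""
    return len(data)  # imagine buffer-overflow-prone parsing here

def sandboxed(func, conn, args):
    try:
        conn.send(("ok", func(*args)))
    except Exception as e:        # a crash here cannot corrupt the parent
        conn.send(("err", repr(e)))
    finally:
        conn.close()

def call_in_sandbox(func, *args):
    """Run func in a child process; only the serialized result crosses back."""
    parent, child = Pipe()
    p = Process(target=sandboxed, args=(func, child, args))
    p.start()
    status, value = parent.recv()
    p.join()
    if status == "err":
        raise RuntimeError(f"sandboxed call failed: {value}")
    return value

if __name__ == "__main__":
    print(call_in_sandbox(untrusted_parse, b"header\x00payload"))  # 14
```

Memory corruption in the child is confined to its own address space; the parent sees either a well-formed result or an error, mirroring the protection Sandcrust provides to the Rust main program.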
Classifying users according to their behaviors is a complex problem due to the high volume of data and the unclear association between distinct data points. Although over the past years behavioral research has mainly focused on Massive Multiplayer Online Role-Playing Games (MMORPG) such as World of Warcraft (WoW), which have predefined player classes, little has been applied to Open World Sandbox Games (OWSG). Some OWSG have no player classes or structured linear gameplay mechanics, as the player is given the freedom to wander and interact with the virtual world at will. This research focuses on identifying the different play styles that exist within the non-structured gameplay sessions of OWSG. This paper uses the OWSG TUG as a case study; over a period of forty-five days, a database stored selected gameplay events happening on the research server. The study applied k-means clustering to this dataset and evaluated the resulting distinct behavioral profiles to classify player sessions in an open world sandbox game.
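A sketch of the clustering step described above, using scikit-learn on toy per-session feature vectors; the features and cluster count are illustrative, whereas the study derived its features from logged TUG gameplay events:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy per-session features: [blocks_built, mobs_fought, tiles_explored]
sessions = np.array([
    [120,  2,  300],   # builder-like
    [150,  5,  250],
    [ 10, 80,  400],   # fighter-like
    [  5, 95,  350],
    [ 20, 10, 2000],   # explorer-like
    [ 15,  8, 1800],
])

# Standardize so no single feature dominates the Euclidean distance.
X = StandardScaler().fit_transform(sessions)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for session, label in zip(sessions, kmeans.labels_):
    print(session, "-> play-style cluster", label)
```

Standardization matters here: without it, the raw tile counts would dominate the distance metric and every session would cluster by exploration alone.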