Biblio
The battlefield environment differs from the normal environment in its irregular communications and in the possibility that communication and medical units are destroyed by enemy forces. Information collected on the battlefield by soldiers is valuable and must reach top-level commanders in time for timely decision making. Ambulance staff on the battlefield also need to record the data of injured soldiers after first aid, so that field hospital staff can prepare for incoming casualties. In this research, we propose two transaction techniques to handle these issues, using different concurrency control protocols depending on the nature of the transaction rather than a single concurrency control protocol for all transaction types. The message transaction technique is used to collect valuable data from the battlefield; top-level commanders can view it according to their permissions by logging into the system, which helps them make timely decisions, and DBMS tools can be used to organize the data, generate reports, and support future analysis. The medical service unit transactional workflow technique provides medical authorities with information about injured soldiers and their status, which helps them prepare the required resources before the wounded arrive at the hospitals. Both techniques handle disconnection during transaction processing. In our approach, a transaction consists of four phases: reading, editing, validation, and writing. Its processing is based on the optimistic concurrency control protocol and on actionability rules that describe how a transaction behaves if the value of one or more of its attributes is changed by other transactions during its processing.
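As a rough illustration of the four-phase processing this abstract describes, the sketch below implements a toy optimistic transaction whose validation phase re-checks the versions recorded during the read phase. The `Record`/`Transaction` names and the version counters are my own assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a four-phase optimistic transaction (read, edit, validate, write).
# Illustrative only: names and version counters are assumptions, not the paper's API.

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0          # incremented on every committed write

class Transaction:
    def __init__(self, store):
        self.store = store        # dict: key -> Record
        self.read_set = {}        # key -> version observed in the reading phase
        self.write_set = {}       # key -> new value produced in the editing phase

    def read(self, key):          # reading phase
        rec = self.store[key]
        self.read_set[key] = rec.version
        return rec.value

    def edit(self, key, value):   # editing phase (local, no locks held)
        self.write_set[key] = value

    def validate(self):           # validation phase: did anyone change what we read?
        return all(self.store[k].version == v for k, v in self.read_set.items())

    def commit(self):             # writing phase, only if validation succeeds
        if not self.validate():
            return False          # conflict: caller may re-read and retry
        for key, value in self.write_set.items():
            rec = self.store[key]
            rec.value = value
            rec.version += 1
        return True

store = {"a": Record(1)}
t = Transaction(store)
t.edit("a", t.read("a") + 1)
print(t.commit())   # True if no concurrent commit changed "a" after the read
```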
This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare the guarantees of distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.
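To make the black-box-of-states picture concrete, here is a small, purely illustrative sketch (not the paper's formalism): it models an execution as the sequence of states the store passes through and checks whether a transaction's reads could all have come from a single such state, in the spirit of snapshot-style guarantees.

```python
# Illustrative sketch of the state-based view: a storage execution as a sequence
# of client-observable states, plus a simplified check that a transaction's
# reads are consistent with at least one such state.

def apply_writes(state, writes):
    """Each committed transaction moves the store to a new state."""
    new_state = dict(state)
    new_state.update(writes)
    return new_state

def execution_states(initial_state, committed_writes):
    """All states the black-box store transitions through."""
    states = [dict(initial_state)]
    for writes in committed_writes:
        states.append(apply_writes(states[-1], writes))
    return states

def reads_from_single_state(read_set, states):
    """Simplified snapshot-style check: every value the transaction observed
    must appear together in some single state."""
    return any(all(state.get(k) == v for k, v in read_set.items())
               for state in states)

# Example: a transaction observed {x: 1, y: 2}; was there a state where both held?
states = execution_states({"x": 0, "y": 0}, [{"x": 1}, {"y": 2}])
print(reads_from_single_state({"x": 1, "y": 2}, states))  # True
```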
We focus on the concept of serializability in order to ensure the correct processing of transactions. However, both serializability and related properties of transaction-based applications can be affected when the system is compromised. Ensuring transaction serializability in a compromised system is one of the requirements for correctly handling interrelated transactions, and it prevents blocking situations in which neither a transaction nor its related sub-transactions can commit. In addition, some transactions may be marked as malicious, and they compromise the serializability of the running system. In this context, this paper proposes an approach for processing transactions in a cloud-of-databases environment that preserves the serializability of running transactions whether or not the system is compromised. We also propose an intrusion-tolerant scheme to ensure the continuity of running transactions. A case study and simulation results are presented to illustrate the capabilities of the proposed system.
Serializability is the most important property for ensuring the correct processing of transactions, particularly when several transactions access the same data concurrently or when dependency relationships exist between running sub-transactions. However, some transactions may be marked as malicious, and they compromise the serializability of the running system. For that purpose, we propose an intrusion-tolerant scheme to ensure the continuity of running transactions. A transaction dependency graph is used by the CDC to decide which data items and transactions are threatened by a malicious activity. We explain how to use the proposed scheme and illustrate its behavior and efficiency against a compromised transaction-based system in a cloud-of-databases environment. Several issues must be considered when processing a set of interleaved transactions in a transaction-based environment; in most cases they stem from concurrent access to the same data by several transactions or from dependency relationships between running transactions. Serializability may be affected if a transaction that belongs to a processing node is compromised.
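The dependency-graph idea can be pictured as a simple reachability computation: starting from a transaction flagged as malicious, every dependent transaction and every data item it wrote is considered threatened. The structures below (`dependents`, `data_touched`) are assumptions made for illustration, not the paper's data model.

```python
# Hypothetical sketch: propagate a "threatened" flag along transaction
# dependencies starting from a transaction marked as malicious.

from collections import deque

def threatened(dependents, data_touched, malicious_tx):
    """dependents[t] -> transactions that read data written by t;
    data_touched[t] -> data items written by t."""
    affected_txs, affected_data = set(), set()
    queue = deque([malicious_tx])
    while queue:
        tx = queue.popleft()
        if tx in affected_txs:
            continue
        affected_txs.add(tx)
        affected_data |= set(data_touched.get(tx, ()))
        queue.extend(dependents.get(tx, ()))
    return affected_txs, affected_data

# Example: T2 read data written by T1, and T3 read data written by T2.
dependents = {"T1": ["T2"], "T2": ["T3"]}
data_touched = {"T1": ["x"], "T2": ["y"], "T3": ["z"]}
print(threatened(dependents, data_touched, "T1"))
# ({'T1', 'T2', 'T3'}, {'x', 'y', 'z'})
```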
In transactional database systems, multiversion concurrency control is maintained for secure, fast, and efficient access to shared data. Effective coordination must be established between owners, users, developers, and system operators to maintain inter-cloud and intra-cloud communication. Most services and applications offered in the cloud are real-time, which requires an optimized, compatible service environment between master and slave clusters. The methodology offered in this paper supports replication and triggering methods intended for data consistency and dynamicity: intercommunication between different clusters is processed through middleware, while slave intra-communication is handled by verification and identification protection. The proposed approach incorporates a resistive flow to handle high-impact systems by identifying and verifying multiple processes. Results show that the new scheme reduces the overheads of different master and slave servers, as they are co-located in clusters, which allows increased horizontal and vertical scalability of resources.
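As a minimal illustration of the multiversion access the abstract relies on, the sketch below keeps a timestamped version list per record, so a reader at a given snapshot timestamp sees the latest version committed at or before that timestamp and never blocks writers. It is a generic MVCC toy, not the paper's replication middleware.

```python
# Generic multiversion record (illustrative): writers append timestamped
# versions, readers pick the latest version at or before their snapshot.

import bisect

class MVRecord:
    def __init__(self, initial):
        self.timestamps = [0]            # commit timestamps, kept sorted
        self.values = [initial]

    def write(self, commit_ts, value):
        i = bisect.bisect_right(self.timestamps, commit_ts)
        self.timestamps.insert(i, commit_ts)
        self.values.insert(i, value)

    def read(self, snapshot_ts):
        # latest version committed at or before the reader's snapshot
        i = bisect.bisect_right(self.timestamps, snapshot_ts)
        return self.values[i - 1] if i else None

rec = MVRecord("v0")
rec.write(10, "v1")
rec.write(20, "v2")
print(rec.read(15))   # 'v1': the reader is unaffected by the later write at ts=20
```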
Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.
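The staging idea can be pictured, very loosely, as committing a transaction one stage at a time, so that a failure in a later stage can no longer roll back or signal anything to participants of an earlier, already committed stage. The sketch below is my own simplification under that assumption, not the paper's staged commit protocol.

```python
# Loose sketch of stage-at-a-time commit (an illustration, not the paper's
# protocol): each stage is prepared and committed on its own participants
# before the next stage starts, so a later abort only affects its own stage.

class Participant:
    def __init__(self, name):
        self.name = name
        self.pending = []

    def prepare(self, action):
        self.pending.append(action)      # a real resource manager could refuse here
        return True

    def commit(self):
        print(f"{self.name} commits {self.pending}")
        self.pending = []

    def abort(self):
        self.pending = []

def run_staged(stages):
    """stages: list of stages; each stage is a list of (participant, action) pairs."""
    for stage in stages:
        participants = {p for p, _ in stage}
        if not all(p.prepare(action) for p, action in stage):
            for p in participants:
                p.abort()                # only this stage is rolled back
            return False
        for p in participants:
            p.commit()                   # earlier stages are already durable
    return True

# Example: two stages handled by different administrative domains.
a, b = Participant("domainA"), Participant("domainB")
run_staged([[(a, "debit")], [(b, "credit")]])
```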