Biblio
Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about individual employees involved in process execution that is only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility compared to methods that exploit frequency-based filtering, while providing the same privacy guarantees.
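For intuition, the following is a minimal sketch of the k-anonymity side of prefix-tree-based event log sanitization, assuming traces are given as lists of activity names. It is not the PRETSA algorithm itself (PRETSA merges and relabels cases and additionally enforces t-closeness over sensitive attributes); here infrequent prefixes are simply truncated so that every retained prefix is shared by at least k cases.

```python
from collections import defaultdict

def sanitize_traces(traces, k):
    """Keep only trace prefixes shared by at least k cases (k-anonymity sketch).

    traces: list of activity-name lists, one per case.
    Returns truncated traces. Illustrative simplification, not PRETSA itself.
    """
    # Count how many cases pass through every prefix of every trace.
    prefix_count = defaultdict(int)
    for trace in traces:
        for i in range(1, len(trace) + 1):
            prefix_count[tuple(trace[:i])] += 1

    sanitized = []
    for trace in traces:
        kept = []
        for i in range(1, len(trace) + 1):
            if prefix_count[tuple(trace[:i])] >= k:
                kept = trace[:i]
            else:
                break  # longer prefixes can only be rarer, so stop extending
        sanitized.append(kept)
    return sanitized

if __name__ == "__main__":
    log = [["register", "check", "approve"],
           ["register", "check", "approve"],
           ["register", "check", "reject"]]
    print(sanitize_traces(log, k=2))
    # The rare "reject" suffix is dropped; every retained prefix occurs >= 2 times.
```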
The next-generation military environment requires a delay-tolerant network for sharing data and resources using an interoperable Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) infrastructure. In this paper, we propose a new distributed SDN (Software-Defined Networking) architecture for tactical environments based on distributed cloudlets. The objective is to reduce the end-to-end delay of tactical traffic flows and to improve management capabilities, allowing flexible control and network resource allocation. The proposed SDN architecture is implemented over three layers: a decentralized cloudlet layer, where each cloudlet has its own SDRN (Software-Defined Radio Networking) controller; a decentralized MEC (Mobile Edge Computing) layer, with an SDN controller for each MEC node; and a centralized private cloud acting as a trusted third-party authority controlled by a centralized SDN controller. The experimental validation is carried out via relevant and realistic tactical scenarios based on strategic traffic loads, i.e., tactical SMS (Short Message Service), UV (Unmanned Vehicle) patrol deployment, and high bit rate ISR (Intelligence, Surveillance, and Reconnaissance) video.
Modern vehicles in Intelligent Transportation Systems (ITS) can communicate with each other as well as with roadside infrastructure units (RSUs) in order to increase transportation efficiency and road safety. For example, there are techniques to alert drivers in advance about traffic incidents and to help them avoid congestion. Threats to these systems, on the other hand, can limit the benefits of these technologies. Securing ITS itself is therefore an important concern in ITS design and implementation. In this paper, we provide a security model of ITS that extends the classic layered network security model with transportation security and information security, and that gives a reference for designing ITS architectures. Based on this security model, we also present a classification of ITS threats for defense. Finally, a proof-of-concept example with malicious nodes in an ITS is given to demonstrate the impact of attacks. We analyze the threat of malicious nodes and their effects on commuters, such as increased toll fees, travel distances, and travel times. Experimental results from simulations based on Veins show that the threats lead to 43.40% higher total toll fees, 39.45% longer travel distances, and 63.10% longer travel times.
Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires the architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge on composition of microservices. Related approaches to prevent such leaks do not take into account characteristics of cyber-physical systems like real-time behavior or message passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check if the real-time message passing of microservices is secure. In combination, our contributions prevent information leaks from emerging on composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
A method is proposed for assessing the degree of compliance of the divisions of a complex distributed corporate information system with a number of information security indicators. Applying the methodology yields a comparative assessment of how well each division complies with the requirements of the corporate information security policy. This assessment may be used for further decision-making by the corporation's management on measures to minimize the risks arising from the possible realization of information security threats.
Code reuse attacks can bypass the DEP mechanism effectively. Moreover, because of the stealthiness of the technique, it has become one of the most intractable threats to securing information systems. Although code randomization and diversity can mitigate the threat to a certain extent, attackers can bypass these solutions due to their high cost and coarse granularity, and memory disclosure vulnerabilities are another weapon attackers can use to bypass them. After analyzing the principle of memory disclosure vulnerabilities, we propose a novel code pointer hiding method based on a resilient area. We explain how to create the resilient area and achieve code pointer hiding in four respects, namely hiding return addresses in data pages, hiding function pointers in data pages, hiding the target pointers of JUMP instructions in code pages, and hiding the target pointers of CALL instructions in code pages. This method stops attackers from reading and analyzing pages in memory, which is a critical stage in finding and creating ROP chains during a code reuse attack. Finally, we evaluate the method comparatively, and the results show that it is feasible and effective in defending against ROP attacks.
Nowadays, communication networks are highly relevant in every field. Because of this, it is necessary to keep them working properly and with an adequate level of security. In many fields, and in anomaly detection in communication networks in particular, the use of early detection methods is very convenient. Therefore, adequate metrics must be defined to allow the correct evaluation of the applied methods with respect to detection delay. This thesis addresses the definition of time-aware metrics for the evaluation of early anomaly detection techniques.
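As a toy illustration of what a time-aware evaluation metric can look like (the concrete scoring below is an assumption made for illustration, not a metric taken from the thesis), a detected anomaly can be credited with a score that decays with the delay between the anomaly's onset and its first detection:

```python
def time_aware_recall(anomalies, detections, max_delay):
    """Toy time-aware recall: each anomaly scores between 0 and 1 depending on
    how early it is detected; undetected or too-late anomalies score 0.

    anomalies:  list of (start, end) time intervals.
    detections: list of detection timestamps.
    max_delay:  delay after which a detection earns no credit.
    NOTE: hypothetical metric for illustration only.
    """
    total = 0.0
    for start, end in anomalies:
        # earliest detection that falls inside the anomaly interval, if any
        hits = [t for t in detections if start <= t <= end]
        if hits:
            delay = min(hits) - start
            total += max(0.0, 1.0 - delay / max_delay)
    return total / len(anomalies) if anomalies else 1.0

if __name__ == "__main__":
    anomalies = [(10, 30), (50, 70)]
    print(time_aware_recall(anomalies, detections=[12, 69], max_delay=20))
    # (1 - 2/20 + 1 - 19/20) / 2 = 0.475: the late detection earns little credit
```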
This article considers the Solovay-Strassen probabilistic test for determining whether a number is prime, and its possible modifications. This test makes it possible to determine in the shortest possible time whether a number is prime or not. The C# programming language was used to implement the algorithm in practice.
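For reference, here is a minimal Python sketch of the standard Solovay-Strassen test (the article's own implementation is in C#): in each round a random base a is drawn, and an odd n passes the round only if a^((n-1)/2) ≡ (a/n) (mod n), where (a/n) is the Jacobi symbol.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # factor out twos using (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False             # a is a witness: n is composite
    return True                      # probably prime, error probability <= 2**-rounds

if __name__ == "__main__":
    print(solovay_strassen(561))        # False (with overwhelming probability): Carmichael number
    print(solovay_strassen(2**61 - 1))  # True: a Mersenne prime
```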
Developing information systems for education and the labour market using a web and grid service architecture enables their modularity, expandability and interoperability. Applying ontologies to the web helps with collecting and selecting knowledge about a certain field in a generic way, thus enabling different applications to understand, use, reuse and share that knowledge among them. A necessary step before publishing computer-interpretable data on the public web is the implementation of common standards that will ensure the exchange of information. The Croatian Qualifications Framework (CROQF) is a project for the standardization of occupations for the labour market, as well as of sets of qualifications, skills and competences and their mutual relations. This paper analyses a considerable body of research dealing with the application of ontologies to information systems in education during the last decade. The main goal is to compare the achieved results according to: 1) phases of development and classifications of education-related ontologies; 2) areas of education; and 3) standards and structures of metadata for educational systems. The collected information is used to provide insight into the building blocks of CROQF, both those well supported by experience and best practices and those that are not, together with guidelines for the development of its own standards using ontological structures.
With the advent of the big data era, information systems have exhibited some new features, including boundary obfuscation, system virtualization, unstructured and diversified data types, and low coupling between functions and data. These features not only lead to a big difference between big data technology (DT) and information technology (IT), but also promote the upgrading and evolution of network security technology. In response to these changes, in this paper we compare the characteristics of the IT era and the DT era and then propose four DT security principles: privacy, integrity, traceability, and controllability, as well as an active and dynamic defense strategy based on "propagation prediction, audit prediction, dynamic management and control". We further discuss the security challenges faced by DT and the corresponding assurance strategies. On this basis, big data security technologies can be divided into four levels: elimination, continuation, improvement, and innovation. These technologies are analyzed, organized, and explained according to six categories: access control, identification and authentication, data encryption, data privacy, intrusion prevention, and security audit and disaster recovery. The results will support the evolution of security technologies in the DT era, the construction of big data platforms, the formulation of security assurance strategies, and security technology choices suitable for big data.