Biblio
Privacy preservation has become a predominant concern in big data analytics and cloud computing. Every organization collects personal data from users, actively or passively. Publishing these data for research and other analytics without removing Personally Identifiable Information (PII) can lead to privacy breaches. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. To provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed, and a Deep Neural Network (DNN)-based framework is proposed to protect the privacy of high-dimensional data. Experimental results show that the proposed approach mitigates information loss without compromising privacy.
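A minimal sketch of the core Mondrian partitioning step, assuming numeric quasi-identifiers and an illustrative k. The median-split recursion and range generalization below are simplified stand-ins for the approach described in the abstract, not the authors' implementation.

```python
# Simplified Mondrian-style k-anonymity partitioning (illustrative sketch).
# Records are dicts of numeric quasi-identifiers; k is the anonymity parameter.

def mondrian(records, qids, k):
    """Recursively split on the widest quasi-identifier at its median;
    stop when a split would leave a partition with fewer than k records."""
    def span(part, a):
        vals = [r[a] for r in part]
        return max(vals) - min(vals)

    def partition(part):
        # choose the attribute with the widest range in this partition
        attr = max(qids, key=lambda a: span(part, a))
        vals = sorted(r[attr] for r in part)
        median = vals[len(vals) // 2]
        left = [r for r in part if r[attr] < median]
        right = [r for r in part if r[attr] >= median]
        if len(left) >= k and len(right) >= k:
            return partition(left) + partition(right)
        return [part]  # no allowable cut: this partition is final

    def generalize(part):
        # replace each quasi-identifier with its range inside the partition
        summary = {a: (min(r[a] for r in part), max(r[a] for r in part)) for a in qids}
        return [dict(r, **{a: summary[a] for a in qids}) for r in part]

    return [row for part in partition(records) for row in generalize(part)]

if __name__ == "__main__":
    data = [{"age": a, "zip": z} for a, z in
            [(25, 53711), (27, 53715), (34, 53703), (36, 53706),
             (41, 53704), (43, 53702), (52, 53718), (55, 53719)]]
    for row in mondrian(data, ["age", "zip"], k=2):
        print(row)
```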
Cloud computing denotes an IT infrastructure in which data and software are stored and processed remotely in a cloud provider's data center and accessed via an Internet service. This paradigm is increasingly reaching the ears of companies and has revolutionized today's marketplace owing to several factors, in particular its cost-effective architectures covering transmission, storage, and intensive data computing. However, like any new technology, cloud computing brings new security problems, which represent the main restraint on adopting this paradigm. Users are therefore reluctant to move to the cloud because of concerns about the security and protection of private data, as well as a lack of trust in cloud service providers. The work in this paper allows readers to familiarize themselves with the field of security in the cloud computing paradigm while presenting our contribution in this context. The security scheme we propose allows a remote user to ensure a completely secure migration of all their data anywhere in the cloud through DNA cryptography. Experiments carried out show that our security solution outperforms its competitors in terms of data integrity and confidentiality.
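As a rough illustration of the DNA-cryptography idea only (not the authors' actual scheme), the sketch below maps ciphertext bits onto nucleotide symbols after a conventional keystream step; the bit-to-base table and the hash-based keystream are assumptions made purely for illustration.

```python
import hashlib

# Illustrative DNA encoding: 2 bits per nucleotide (assumed mapping, not the paper's).
BASES = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {v: k for k, v in BASES.items()}

def keystream(key: bytes, length: int) -> bytes:
    # Simple hash-based keystream for the sketch (not a vetted cipher).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_to_dna(plaintext: bytes, key: bytes) -> str:
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    bits = "".join(f"{byte:08b}" for byte in cipher)
    return "".join(BASES[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decrypt_from_dna(dna: str, key: bytes) -> bytes:
    bits = "".join(BITS[b] for b in dna)
    cipher = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, len(cipher))))

if __name__ == "__main__":
    strand = encrypt_to_dna(b"migrate me", b"secret-key")
    print(strand)
    print(decrypt_from_dna(strand, b"secret-key"))
```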
Due to its costly and time-consuming nature and the wide range of passive barrier elements and tools for breaching them, testing the delay time of passive barriers is only feasible as an experimental tool to verify expert judgements of those delay times. The article focuses on the possibility of creating and using a new method for obtaining delay-time values for various passive barrier elements from expert judgements, which could support the creation of charts in which each interaction between a mechanical barrier element and a potential tool for bypassing it is assigned a temporal value. The article begins with a basic description of expert-judgement methods previously applied to prognoses of socio-economic development and other societal areas, the so-called soft systems. For the delay-time problem, this method had to be modified so that the prospective output could be expressed as a specific quantitative value. To achieve this goal, each stage of the expert judgement was adjusted to use suitable scientific methods to select appropriate experts and then to obtain and process the expert data. High emphasis was placed on evaluating the quality and reliability of the expert judgements, taking into account the specifics of expert selection such as their small number, specialization, and practical experience.
With the widespread use of Wi-Fi and hotspots providing wireless Internet access, awareness of and threats to wireless AP (Access Point) security are steadily increasing. In particular, using unauthorized APs in company, government, and military facilities carries a high risk of exposure to viruses and hacking attacks, so detecting unauthorized APs is necessary to protect information. In this paper, we use an RTT (Round Trip Time) data set to detect authorized and unauthorized APs in a wired/wireless integrated environment and analyze it using machine learning algorithms including SVM (Support Vector Machine), C4.5, KNN (K-Nearest Neighbors), and MLP (Multilayer Perceptron). Overall, KNN shows the highest accuracy.
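A hedged sketch of the classification step using scikit-learn. The RTT features and labels below are synthetic placeholders (the paper's data set is not reproduced here), and a generic decision tree stands in for C4.5.

```python
# Comparing classifiers on RTT-based AP features (synthetic data for illustration).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier          # stand-in for C4.5
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Each sample: mean RTT and RTT jitter (ms); label 1 = unauthorized AP (extra wireless hop).
authorized = np.column_stack([rng.normal(2.0, 0.3, 200), rng.normal(0.2, 0.05, 200)])
unauthorized = np.column_stack([rng.normal(4.5, 0.6, 200), rng.normal(0.8, 0.15, 200)])
X = np.vstack([authorized, unauthorized])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```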
Ransomware has become a very significant cyber threat. The basic idea of ransomware was presented in the form of a cryptovirus in 1995, but it remained a merely conceptual topic for over a decade. By 2017, ransomware had become a reality, with several famous cases having compromised important computer systems worldwide. For example, the damage caused by CryptoLocker and WannaCry was huge, as well as global. These programs encrypt victims' files and require payment to decrypt them. Because they use public key cryptography, the key for recovery cannot be found in the footprint of the ransomware on the victim's system; once infected, the system cannot be recovered without paying for restoration. Various methods to deal with this threat have been developed by antivirus researchers and experts in network security. However, cryptographic defense is believed to be infeasible because recovering a victim's files is computationally as difficult as breaking a public key cryptosystem. Quite recently, various approaches to protect the crypto-API of an OS from malicious code have been proposed. Most ransomware generates encryption keys using the random number generation service provided by the victim's OS. Thus, if a user can control all random numbers generated by the system, he or she can recover the random numbers used by the ransomware for the encryption key. In this paper, we propose a dynamic ransomware protection method that replaces the random number generator of the OS with a user-defined generator. Because the proposed method causes the malicious program to generate keys based on the output of the user-defined generator, it is possible to recover an infected file system by reproducing the keys the attacker used to perform the encryption.
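A minimal sketch of the key idea: if all "random" bytes handed to the encrypting process come from a defender-controlled deterministic generator, the defender can replay them later and rebuild the key. The generator and the toy stream cipher below are illustrative assumptions, not the paper's implementation or a real OS hook.

```python
import hashlib

class LoggedGenerator:
    """Deterministic, defender-controlled 'random' source: given the same seed
    and counter sequence, its output can be reproduced later for recovery."""
    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0
    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

def xor_stream(data: bytes, key: bytes) -> bytes:
    # toy stream cipher keyed by the 'randomly' generated key
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# The ransomware asks the OS for a random key -- but the OS RNG has been replaced.
rng = LoggedGenerator(seed=b"defender-secret")
ransom_key = rng.random_bytes(32)
ciphertext = xor_stream(b"important document", ransom_key)

# Recovery: replay the generator with the same seed to reproduce the key.
replay = LoggedGenerator(seed=b"defender-secret")
recovered_key = replay.random_bytes(32)
print(xor_stream(ciphertext, recovered_key))   # b'important document'
```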
To build a resilient and secure microgrid in the face of growing cyber-attacks and cyber-mistakes, we present a software-defined networking (SDN)-based communication network architecture for microgrid operations. We leverage the global visibility, direct networking controllability, and programmability offered by SDN to investigate multiple security applications, including self-healing communication network management, real-time and uncertainty-aware communication network verification, and specification-based intrusion detection. We also expand a novel cyber-physical testing and evaluation platform that combines a power distribution system simulator (for microgrid energy services) and an SDN emulator with a distributed control environment (for microgrid communications). Experimental results demonstrate that the SDN-based communication architecture and applications can significantly enhance the resilience and security of microgrid operations against the realization of various cyber threats.
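A rough sketch of the specification-based intrusion detection idea in an SDN setting: flows observed by the controller are compared against a whitelist of expected microgrid communications. The flow fields and specification entries are hypothetical examples, not the paper's rule set or any specific controller's API.

```python
# Specification-based detection: any observed flow outside the whitelist is flagged.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowSpec:
    src: str        # expected source host
    dst: str        # expected destination host
    proto: str      # e.g. "dnp3", "modbus", "ntp"
    max_rate_pps: int

# Hypothetical specification of allowed microgrid control traffic
WHITELIST = [
    FlowSpec("10.0.1.10", "10.0.2.20", "dnp3", 100),
    FlowSpec("10.0.1.10", "10.0.2.21", "modbus", 50),
]

def check_flow(src, dst, proto, rate_pps):
    for spec in WHITELIST:
        if (src, dst, proto) == (spec.src, spec.dst, spec.proto):
            if rate_pps <= spec.max_rate_pps:
                return "allowed"
            return "alert: rate exceeds specification"
    return "alert: flow not in specification"

print(check_flow("10.0.1.10", "10.0.2.20", "dnp3", 40))    # allowed
print(check_flow("10.0.9.9", "10.0.2.20", "dnp3", 40))     # unexpected source
```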
This paper introduces quarantine, a new security primitive for an operating system to use in order to protect information and isolate malicious behavior. Quarantine's core feature is the ability to fork a protection domain on-the-fly to isolate a specific principal's execution of untrusted code without risk of a compromise spreading. Forking enables the OS to ensure service continuity by permitting even high-risk operations to proceed, albeit subject to greater scrutiny and constraints. Quarantine even partitions executing threads that share resources into isolated protection domains. We discuss the design and implementation of quarantine within the LockDown OS, a security-focused evolution of the Composite component-based microkernel OS. Initial performance results for quarantine show that about 98% of the overhead comes from the cost of copying memory to the new protection domain.
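The quarantine primitive itself lives inside a microkernel, but the service-continuity idea can be loosely illustrated in user space: run an untrusted operation in a separate process so that a compromise or crash there cannot corrupt the main service's state. This is only an analogy, not the LockDown/Composite mechanism.

```python
# Loose user-space analogy: isolate untrusted work in a separate process so the
# main service keeps running even if that work crashes or misbehaves.
import multiprocessing as mp

def untrusted_task(conn, payload):
    # Runs in its own address space; a crash here cannot touch the parent's state.
    result = sum(payload)        # stand-in for risky, high-scrutiny work
    conn.send(result)
    conn.close()

def run_quarantined(payload, timeout=2.0):
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=untrusted_task, args=(child_conn, payload))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():          # unresponsive: terminate, service continues
        proc.terminate()
        return None
    return parent_conn.recv() if parent_conn.poll() else None

if __name__ == "__main__":
    print(run_quarantined([1, 2, 3]))
```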
NSTX used MDSplus extensively to record data, relay information, and control data acquisition hardware. For NSTX-U the same functionality is expected, as well as an expansion into the realm of securely maintaining parameters for machine protection. Specifically, we designed the Digital Coil Protection System (DCPS) to use MDSplus to manage our physical and electrical limit values and to relay information about the state of our acquisition system to DCPS. Additionally, test and development systems need to use many of the same resources concurrently without interfering with other critical systems. Further complications include providing access to critical, protected data without risking changes being made to it by unauthorized users or through unsupported or uncontrolled methods, whether maliciously or unintentionally. To achieve a level of confidence with an existing software system designed with minimal security controls, a number of changes to how MDSplus is used were designed and implemented. Trees would need to be verified and checked for changes before use, and concurrent creation of trees from vastly different use cases and varying requirements would need to be supported. This paper further discusses the impetus for developing such designs and the methods used to implement them.
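One of the changes described above is verifying trees for unauthorized modification before use. A generic way to do this, sketched below, is to compare file digests against a manifest kept under stricter access control; the manifest format and paths are assumptions, and the sketch deliberately does not use the MDSplus API itself.

```python
# Verify data files against a trusted manifest of SHA-256 digests before use.
import hashlib, json, sys
from pathlib import Path

def digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(tree_dir: str, manifest_file: str) -> bool:
    manifest = json.loads(Path(manifest_file).read_text())   # {"filename": "hex digest"}
    ok = True
    for name, expected in manifest.items():
        path = Path(tree_dir) / name
        if not path.exists() or digest(path) != expected:
            print(f"verification failed: {name}", file=sys.stderr)
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1], sys.argv[2]) else 1)
```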
A model of protection mechanisms in computing systems is presented and its appropriateness is argued. The “safety” problem for protection systems under this model is to determine in a given situation whether a subject can acquire a particular right to an object. In restricted cases, it can be shown that this problem is decidable, i.e. there is an algorithm to determine whether a system in a particular configuration is safe. In general, and under surprisingly weak assumptions, it cannot be decided if a situation is safe. Various implications of this fact are discussed.
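A small sketch of the access-matrix model over which the safety question is posed: subjects and objects index a matrix of rights, and commands built from primitive operations can confer new rights. The command shown is a standard textbook-style example, not one taken from the paper.

```python
# Access-matrix protection model: rights[(subject, object)] is a set of rights.
from collections import defaultdict

rights = defaultdict(set)

def enter(right, subject, obj):
    rights[(subject, obj)].add(right)

def confer_read(invoker, friend, obj):
    """Example command: if the invoker owns obj, it may grant 'read' to friend."""
    if "own" in rights[(invoker, obj)]:
        enter("read", friend, obj)

enter("own", "alice", "file1")
confer_read("alice", "bob", "file1")
print(rights[("bob", "file1")])    # {'read'}

# The safety question asks: starting from a given configuration, can any sequence
# of such commands ever place a particular right (e.g. 'read') in a particular cell?
```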
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.
The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.
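A compact sketch of the lattice idea: security classes form a partial order with a join, and a flow from class A to class B is permitted only if A is at or below B. The subset-based classes below are a standard illustration, not the paper's notation.

```python
# Security classes as sets of categories: the subset lattice with union as join.

def can_flow(src: frozenset, dst: frozenset) -> bool:
    """Information may flow from src to dst only if src <= dst in the lattice."""
    return src <= dst

def join(a: frozenset, b: frozenset) -> frozenset:
    """Least upper bound: the class of data derived from both a and b."""
    return a | b

secret_nuclear = frozenset({"secret", "nuclear"})
secret = frozenset({"secret"})
public = frozenset()

print(can_flow(public, secret))             # True: low-to-high flow is permitted
print(can_flow(secret_nuclear, secret))     # False: would leak the 'nuclear' category
print(join(secret, frozenset({"crypto"})))  # class assigned to combined data
```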
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.