Biblio

Filters: Keyword is Scientific Computing Security
2021-09-16
Wright, Marc, Chizari, Hassan, Viana, Thiago.  2020.  Analytical Framework for National Cyber-Security and Corresponding Critical Infrastructure: A Pragmatistic Approach. 2020 International Conference on Computational Science and Computational Intelligence (CSCI). :127–130.
Countries are putting cyber-security at the forefront of their national agendas. As cyber capabilities increase and infrastructure systems become cyber-enabled, threats from the cyber dimension now have a physical impact. This paper proposes an analytical framework for national cyber-security profiling that draws on national governmental analysis and technical threat-modeling simulations. Applying thematic analysis to national cyber-security strategy, in conjunction with threat-modeling simulation, helps develop a deeper understanding of how threats impact critical infrastructure.
Du, Xin, Tang, Songtao, Lu, Zhihui, Wu, Jie, Gai, Keke, Hung, Patrick C.K..  2020.  A Novel Data Placement Strategy for Data-Sharing Scientific Workflows in Heterogeneous Edge-Cloud Computing Environments. 2020 IEEE International Conference on Web Services (ICWS). :498–507.
The deployment of datasets in the heterogeneous edge-cloud computing paradigm has received increasing attention in state-of-the-art research. However, because of their large sizes and the existence of private scientific datasets, finding an optimal data placement strategy that can minimize data transmission as well as improve performance remains a persistent problem. In this study, the advantages of both edge and cloud computing are combined to construct a data placement model that works for multiple scientific workflows. The most difficult research challenge is to provide a data placement strategy that considers shared datasets, both within individual workflows and among multiple workflows, across various geographically distributed environments. According to the constructed model, not only the storage capacity of edge micro-datacenters but also the data transfer between multiple clouds across regions must be considered. To address this issue, we examined the characteristics of this model and identified the factors that cause transmission delay. We propose using a discrete particle swarm optimization algorithm with differential evolution (DE-DPSO) to distribute datasets during workflow execution, and on this basis propose a new data placement strategy named DE-DPSO-DPS. DE-DPSO-DPS is evaluated through several experiments designed in simulated heterogeneous edge-cloud computing environments. The results demonstrate that our data placement strategy can effectively reduce data transmission time and achieve superior performance compared to traditional strategies for data-sharing scientific workflows.
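The abstract does not give the algorithm's internals; as a rough, hedged illustration of the idea, the sketch below runs a discrete particle swarm whose velocity update is replaced by a differential-evolution-style recombination, assigning datasets to datacenters so that workflows sharing datasets incur little cross-datacenter transfer. The toy instance, cost model, capacity constraint, and parameter values are all illustrative assumptions, not the paper's implementation.

```python
import random

# Toy instance (illustrative): 6 datasets, 3 datacenters. workflows[i]
# lists the datasets a workflow uses; placing two datasets a workflow
# shares in different datacenters incurs one unit of transfer cost.
N_DATASETS, N_DCS = 6, 3
workflows = [[0, 1, 2], [2, 3], [3, 4, 5], [0, 5]]
capacity = 3  # max datasets per (edge) datacenter -- assumed constraint

def cost(placement):
    """Transfer cost plus a penalty for exceeding datacenter capacity."""
    c = sum(1 for wf in workflows
              for i in wf for j in wf
              if i < j and placement[i] != placement[j])
    for dc in range(N_DCS):
        over = sum(1 for p in placement if p == dc) - capacity
        c += 10 * max(0, over)
    return c

def de_dpso(n_particles=20, iters=200, F=0.5, cr=0.3):
    """Discrete PSO with a DE-style recombination of personal/global bests."""
    swarm = [[random.randrange(N_DCS) for _ in range(N_DATASETS)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=cost)[:]
    for _ in range(iters):
        for k, p in enumerate(swarm):
            trial = []
            for d in range(N_DATASETS):
                if random.random() < F:        # pull toward the global best
                    trial.append(gbest[d])
                elif random.random() < cr:     # DE-style random mutation
                    trial.append(random.randrange(N_DCS))
                else:                          # keep the personal-best gene
                    trial.append(pbest[k][d])
            if cost(trial) <= cost(p):         # greedy selection, as in DE
                swarm[k] = trial
            if cost(swarm[k]) < cost(pbest[k]):
                pbest[k] = swarm[k][:]
        gbest = min(pbest + [gbest], key=cost)[:]
    return gbest, cost(gbest)

print(de_dpso())  # e.g. ([0, 0, 0, 1, 1, 1], 2)
```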
Al-Jody, Taha, Holmes, Violeta, Antoniades, Alexandros, Kazkouzeh, Yazan.  2020.  Bearicade: Secure Access Gateway to High Performance Computing Systems. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1420–1427.
Cyber security is becoming a vital part of many information technologies and computing systems. Increasingly, High-Performance Computing systems are used in scientific research, academia, and industry. High-Performance Computing applications are specifically designed to take advantage of the parallel nature of High-Performance Computing systems. Current research into High-Performance Computing systems focuses on improvements in software development, parallel algorithms, and computer systems architecture. However, there are no significant efforts in developing common High-Performance Computing security standards. Security of High-Performance Computing resources is often an add-on to existing, varied institutional policies that do not take into account the additional requirements of High-Performance Computing security. Also, the users' terminals or portals used to access High-Performance Computing resources are frequently insecure or used on unprotected networks. In this paper we present Bearicade, a data-driven Security Orchestration, Automation and Response system. Bearicade collects data from HPC systems and their users, enabling Machine Learning based solutions to address current security issues in High-Performance Computing systems. System security is achieved through monitoring, analysis, and interpretation of data such as user activity, server requests, devices used, and geographic locations. Any anomaly in user behaviour is detected using machine learning algorithms and made visible to system administrators to help mitigate the threats. The system was tested on a university campus grid system by administrators and users. Two case studies, anomaly detection of user behaviour and classification of malicious Linux terminal commands, demonstrated machine learning approaches to identifying potential security threats using Bearicade's data. The results demonstrated that detailed information is provided to HPC administrators to detect possible security attacks and to act promptly.
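As a hedged sketch of the kind of user-behaviour anomaly detection the abstract describes (not Bearicade's actual code), the snippet below fits scikit-learn's IsolationForest to simple per-session features such as request rate, login hour, and host count; the feature set and the training data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-session features: [requests/min, login hour, distinct hosts].
# Assumed training data: ordinary HPC-portal sessions.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(20, 5, 500),          # modest request rate
    rng.normal(14, 3, 500) % 24,     # mostly daytime logins
    rng.poisson(2, 500),             # a couple of hosts per session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of requests from many hosts at 3 a.m. should stand out.
sessions = np.array([[22.0, 15.0, 2.0],     # ordinary session
                     [400.0, 3.0, 40.0]])   # suspicious session
print(model.predict(sessions))  # 1 = normal, -1 = anomaly; e.g. [ 1 -1]
```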
2020-03-16
Iuhasz, Gabriel, Petcu, Dana.  2019.  Perspectives on Anomaly and Event Detection in Exascale Systems. 2019 IEEE 5th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :225–229.
The design and implementation of exascale systems is an important challenge today. Such a system is expected to combine HPC with Big Data methods and technologies to allow the execution of scientific workloads that are not tractable at present. In this paper we focus on an event and anomaly detection framework, which is crucial for giving a global overview of an exascale system (itself necessary for the successful implementation and exploitation of the system). We propose an architecture for such a framework and show how it can be used to handle failures during job execution.
Zebari, Dilovan Asaad, Haron, Habibollah, Zeebaree, Diyar Qader, Zain, Azlan Mohd.  2019.  A Simultaneous Approach for Compression and Encryption Techniques Using Deoxyribonucleic Acid. 2019 13th International Conference on Software, Knowledge, Information Management and Applications (SKIMA). :1–6.
Data compression provides content in a compact form and has become a necessity in the field of communication as well as in many scientific studies. Data transmission must be sufficiently secure so that information passes through the channel without loss or tampering. Encryption scrambles information so that only the intended receiver can read it, and thus provides a means of securing data. Compression and encryption are therefore two crucial steps required for the secure transmission of large amounts of data. In the typical case, the compressed data is encrypted and then transmitted, but this sequential approach is time-consuming and computationally costly. In the present paper, a simultaneous compression and encryption technique based on DNA is examined and proposed for various kinds of secret data. In the simultaneous technique, both operations are performed in a single step, which reduces the time of the whole task. The present work consists of two phases. The first phase encodes plaintext with 6 bits instead of 8, so that each character is represented by three DNA nucleotides, while each image pixel is encoded by four DNA nucleotides; this phase compresses the plaintext by 25% of the original text. In the second phase, compression and encryption are performed at the same time: both types of data are compressed to half their size and encrypted with the generated symmetric key. The technique is thus more secure against intruders. Experimental results show better performance of the proposed scheme compared with standard compression techniques.
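The first phase — 6-bit character codes at two bits per nucleotide — can be illustrated directly. The sketch below is a minimal interpretation of that encoding; the 2-bit-to-nucleotide table and the 64-symbol alphabet are assumptions (the paper's actual tables and its encryption phase are not reproduced).

```python
# Two bits per nucleotide, so a 6-bit code is exactly three nucleotides.
BITS2NT = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
NT2BITS = {v: k for k, v in BITS2NT.items()}

# Assumed 64-symbol alphabet for the 6-bit codes (the paper's may differ).
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ."

def encode(text):
    """Map each character to a 6-bit index, i.e. three DNA nucleotides."""
    out = []
    for ch in text:
        bits = format(ALPHABET.index(ch), '06b')
        out.append(''.join(BITS2NT[bits[i:i + 2]] for i in (0, 2, 4)))
    return ''.join(out)

def decode(dna):
    out = []
    for i in range(0, len(dna), 3):
        bits = ''.join(NT2BITS[nt] for nt in dna[i:i + 3])
        out.append(ALPHABET[int(bits, 2)])
    return ''.join(out)

msg = "secret data"
dna = encode(msg)
print(dna)                 # three nucleotides per character
assert decode(dna) == msg
# 3 nucleotides (6 bits) per char vs 8 bits originally: a 25% reduction.
```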
Karpenko, V.I., Vasilev, S.P., Boltunov, A.P., Voloshin, E.A., Voloshin, A. A..  2019.  Intelligent Consumers Device and Cybersecurity of Load Management in Microgrids. 2019 2nd International Youth Scientific and Technical Conference on Relay Protection and Automation (RPA). :1–10.
The digitalization of the electric power industry and the development of territories isolated from the unified energy system are priorities in the development of the energy sector. Thanks to innovative solutions and digital technologies, it becomes possible to make more effective managing and monitoring. Such solution is IoT platform with intelligent control system implemented by software.
Singh, Rina, Graves, Jeffrey A., Anantharaj, Valentine, Sukumar, Sreenivas R..  2019.  Evaluating Scientific Workflow Engines for Data and Compute Intensive Discoveries. 2019 IEEE International Conference on Big Data (Big Data). :4553–4560.
Workflow engines used to script scientific experiments involving numerical simulation, data analysis, instruments, edge sensors, and artificial intelligence have to deal with the complexities of hardware, software, resource availability, and the collaborative nature of science. In this paper, we survey workflow engines used in data-intensive and compute-intensive discovery pipelines from scientific disciplines such as astronomy, high energy physics, earth system science, bio-medicine, and material science, and present a qualitative analysis of their respective capabilities. We compare five popular workflow engines and their differentiated approaches to job orchestration, job launching, data management and provenance, security authentication, ease-of-use, workflow description, and scripting semantics. The comparisons presented in this paper allow practitioners to choose the appropriate engine for their scientific experiment and lead to recommendations for future work.
Ablaev, Farid, Andrianov, Sergey, Soloviev, Aleksey.  2019.  Quantum Electronic Generator of Random Numbers for Information Security in Automatic Control Systems. 2019 International Russian Automation Conference (RusAutoCon). :1–5.

The application of random numbers to the information security of data, communication lines, computer units, and automated driving systems is considered. The possibilities for building quantum random number generators and existing solutions for acquiring sufficiently random sequences are analyzed. The authors present a method for creating quantum generators on the basis of semiconductor electronic components. An electron-quantum generator based on electron tunneling is experimentally demonstrated. It is shown to create random sequences of a high security level that satisfy the known NIST statistical tests (P-Value > 0.9). The generator can be used for the formation of both closed and open cryptographic keys in computer systems and other platforms, and has great potential for the realization of random walks and probabilistic computing on the basis of neural nets and other IT problems.
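For context on what "satisfying the NIST statistical tests" means, here is a hedged sketch of the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test; the full suite the authors refer to contains fifteen tests, and this standalone implementation is illustrative only (a software PRNG stands in for the electron-tunneling source).

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1,
    sum them, and compare the normalized deviation against the
    half-normal distribution via the complementary error function."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = [random.getrandbits(1) for _ in range(100_000)]
p = monobit_p_value(bits)
print(f"P-Value = {p:.3f}; sequence {'passes' if p >= 0.01 else 'fails'}")
```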

Molyakov, Andrey.  2019.  New security descriptor computing algorithm of Supercomputers. 2019 Third World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). :349–350.
The author describes a computing algorithm based on a new scientific definition: “The resulting convolution, which takes into account changes in the significant bits of variables of the Zhegalkin polynomial, is a superposition of hash-function calculations for the i-th process.”
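The definition is dense; for background, the Zhegalkin polynomial of a Boolean function is its algebraic normal form (an XOR of AND-monomials). A hedged sketch of computing it via the standard binary Möbius transform follows — illustrative background only, not the author's descriptor algorithm.

```python
def zhegalkin_coeffs(truth_table):
    """Binary Moebius transform: turn the truth table of an n-variable
    Boolean function into ANF (Zhegalkin) coefficients. Bit i of a row
    index selects variable x_i; coefficient c[m] == 1 means the monomial
    formed by the variables in mask m appears in the polynomial."""
    c = list(truth_table)
    n = len(c).bit_length() - 1
    for i in range(n):
        for m in range(len(c)):
            if m & (1 << i):
                c[m] ^= c[m ^ (1 << i)]  # fold in the subcube below m
    return c

# f(x0, x1) = x0 OR x1, truth table for rows 00, 01, 10, 11:
print(zhegalkin_coeffs([0, 1, 1, 1]))  # [0, 1, 1, 1] -> x0 XOR x1 XOR x0*x1
```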
Udod, Kyryll, Kushnarenko, Volodymyr, Wesner, Stefan, Svjatnyj, Volodymyr.  2019.  Preservation System for Scientific Experiments in High Performance Computing: Challenges and Proposed Concept. 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). 2:809–813.
The continuously growing number of research experiments using High Performance Computing (HPC) raises questions of research data management, in particular how to preserve a scientific experiment, including all related data, for the long term so that it can be reproduced in the future. This paper covers challenges and possible solutions related to the preservation of scientific experiments on HPC systems and presents a concept for a preservation system for HPC computations. Storing the experiment itself with some related data is not enough for future reproduction, especially in the long term. For that case, preservation of the whole experiment environment (operating system, used libraries, environment variables, input data, etc.) via containerization technology (e.g. using Docker or Singularity) is proposed. This approach allows the entire environment to be preserved, but is not always possible on every HPC system because of security issues. It also leaves open the question of how to deal with commercial software that was used within the experiment. As a possible solution, we propose running a preservation process outside of the computing system, on a web server, and replacing all commercial software inside the created experiment image with open-source analogues, which should allow future reproduction of the experiment without any legal issues. A prototype of such a system was developed; the paper provides the scheme of the system and its main features, and describes the first experimental results and further research steps.
Zhou, Yaqiu, Ren, Yongmao, Zhou, Xu, Yang, Wanghong, Qin, Yifang.  2019.  A Scientific Data Traffic Scheduling Algorithm Based on Software-Defined Networking. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :62–67.
Compared to ordinary Internet applications, the transfer of scientific data flows often has higher requirements for network performance, and network security devices and systems often reduce the efficiency of scientific data transfer. As a new type of network architecture, Software-Defined Networking (SDN) decouples the data plane from the control plane; its programmability allows users to customize the network transfer path and makes the network more intelligent. The Science DMZ model is a private network for scientific data flow transfer, which can improve performance while still ensuring network security. This paper combines SDN with the Science DMZ, and designs and implements an SDN-based traffic scheduling algorithm that takes link load into account. In addition to distinguishing scientific data flows from common data flows, the algorithm further distinguishes the scientific data flows of different applications and performs different traffic scheduling for specific link states. Experimental results show that the algorithm can effectively improve the transmission performance of scientific data flows.
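As a hedged illustration of load-aware path selection of the kind the abstract describes (not the paper's algorithm), the sketch below runs Dijkstra over link weights that grow steeply with current utilization, so a large scientific flow is steered around congested links; the topology and the cost function are assumptions.

```python
import heapq

# links[(u, v)] = (capacity, current_load); illustrative topology.
links = {('a', 'b'): (10, 9), ('b', 'd'): (10, 9),   # short but congested
         ('a', 'c'): (10, 1), ('c', 'd'): (10, 1)}   # longer but idle
graph = {}
for (u, v), lc in links.items():
    graph.setdefault(u, []).append((v, lc))
    graph.setdefault(v, []).append((u, lc))

def weight(capacity, load):
    """Cost rises steeply as a link approaches saturation."""
    return 1.0 / max(capacity - load, 1e-9)

def least_loaded_path(src, dst):
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, (cap, load) in graph[u]:
            nd = d + weight(cap, load)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# The scientific flow avoids the nearly saturated a-b-d path.
print(least_loaded_path('a', 'd'))  # ['a', 'c', 'd']
```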
2020-01-02
Shabanov, Boris, Sotnikov, Alexander, Palyukh, Boris, Vetrov, Alexander, Alexandrova, Darya.  2019.  Expert System for Managing Policy of Technological Security in Uncertainty Conditions: Architectural, Algorithmic, and Computing Aspects. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1716–1721.

The paper discusses the architectural, algorithmic, and computing aspects of creating and operating a class of expert system for managing the technological safety of an enterprise under a large flow of diagnostic variables. The algorithm for finding a faulty technological chain uses expert information, formed as a set of evidence on the influence of diagnostic variables on the correctness of the technological process. Using the Dempster-Shafer belief function makes it possible to determine an overall probability measure on subsets of faulty process chains. To combine different evidence, the orthogonal sums of the basic probabilities determined for each piece of evidence are calculated. The procedure described above is converted into production rules of the knowledge base. A description of the developed prototype of the expert system, its architecture, algorithms, and software is given. The functionality of the expert system and its configuration tools for a specific type of production are discussed.
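The orthogonal sum mentioned here is Dempster's rule of combination. A minimal sketch follows, with an assumed two-chain frame of discernment; the expert system's actual evidence model is of course richer.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: orthogonal sum of two basic probability
    assignments. Focal elements are frozensets of hypotheses
    (here: which technological chain is faulty)."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: p / (1.0 - conflict) for a, p in combined.items()}

# Two pieces of evidence about faulty chains {c1, c2}:
theta = frozenset({'c1', 'c2'})
m1 = {frozenset({'c1'}): 0.6, theta: 0.4}  # variable 1 implicates c1
m2 = {frozenset({'c2'}): 0.3, theta: 0.7}  # variable 2 weakly implicates c2
print(dempster_combine(m1, m2))
# {c1}: ~0.512, {c2}: ~0.146, theta: ~0.341
```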

2019-10-28
Kanewala, Thejaka Amila, Zalewski, Marcin, Lumsdaine, Andrew.  2018.  Distributed, Shared-Memory Parallel Triangle Counting. Proceedings of the Platform for Advanced Scientific Computing Conference. :5:1–5:12.
Triangles are the most basic non-trivial subgraphs. Triangle counting is used in a number of different applications, including social network mining, cyber security, and spam detection. In general, triangle counting algorithms are readily parallelizable, but when implemented in distributed, shared-memory, their performance is poor due to high communication, imbalance of work, and the difficulty of exploiting locality available in shared memory. In this paper, we discuss four different (but related) triangle counting algorithms and how their performance can be improved in distributed, shared-memory by reducing in-node load imbalance, improving cache utilization, minimizing network overhead, and minimizing algorithmic work. We generalize the four different triangle counting algorithms into a common framework and show that for all four algorithms the in-node load imbalance can be minimized while utilizing caches by partitioning work into blocks of vertices, the network overhead can be minimized by aggregation of blocks of work, and algorithm work can be reduced by partitioning vertex neighbors by degree. We experimentally evaluate the weak and the strong scaling performance of the proposed algorithms with two types of synthetic graph inputs and three real-world graph inputs. We also compare the performance of our implementations with the distributed, shared-memory triangle counting algorithms available in PowerGraph-GraphLab and show that our proposed algorithms outperform those algorithms, both in terms of space and time.
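The degree-based neighbor partitioning the paper exploits has a well-known sequential core: orient each edge from its lower- to its higher-degree endpoint and intersect out-neighborhoods, so every triangle is counted exactly once. Below is a hedged single-node sketch of that core idea, without the distributed blocking and aggregation the paper contributes.

```python
from collections import defaultdict

def count_triangles(edges):
    """Count triangles by directing each edge toward the endpoint of
    higher degree (ties broken by id) and intersecting out-neighborhoods;
    each triangle is then counted exactly once."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = lambda x: (deg[x], x)        # degree order, ids break ties
    out = {x: set() for x in deg}
    for u, v in edges:
        lo, hi = sorted((u, v), key=rank)
        out[lo].add(hi)                 # orient low -> high
    return sum(len(out[u] & out[v]) for u in out for v in out[u])

# K4 (the complete graph on 4 vertices) has exactly 4 triangles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_triangles(k4))  # 4
```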
Zhai, Keke, Banerjee, Tania, Zwick, David, Hackl, Jason, Ranka, Sanjay.  2018.  Dynamic Load Balancing for Compressible Multiphase Turbulence. Proceedings of the 2018 International Conference on Supercomputing. :318–327.
CMT-nek is a new scientific application for performing high-fidelity predictive simulations of particle-laden, explosively dispersed turbulent flows. CMT-nek involves detailed simulations, is compute intensive, and is targeted to be deployed on exascale platforms. The moving particles are the main source of load imbalance as the application is executed on parallel processors. In a demonstration problem, all the particles are initially in a closed container until a detonation occurs and the particles move apart. If all processors get an equal share of the fluid domain, then only some of the processors get sections of the domain that are initially laden with particles, leading to disparate load on the processors. In order to eliminate load imbalance across processors and to reduce the makespan, we present different load balancing algorithms for CMT-nek on large-scale multicore platforms consisting of hundreds of thousands of cores. The load balancing algorithms are described in detail, their performance is compared, and the associated overheads are analyzed. Evaluations of the application with and without load balancing show that with load balancing, simulation becomes faster by a factor of up to 9.97.
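As a hedged, much-simplified illustration of rebalancing particle-induced load (not CMT-nek's algorithm), the sketch below uses the classic greedy longest-processing-time heuristic: fluid blocks carry uniform cost, particle-laden blocks carry extra cost, and blocks are assigned to the currently least-loaded processor. The cost model is an assumption.

```python
import heapq

def rebalance(block_costs, n_procs):
    """Greedy LPT: assign the most expensive blocks first, each to the
    currently least-loaded processor. Returns (load, proc, blocks) tuples."""
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for b, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, p, blocks = heapq.heappop(heap)
        blocks.append(b)
        heapq.heappush(heap, (load + cost, p, blocks))
    return sorted(heap, key=lambda t: t[1])

# 8 domain blocks: cost 1 for pure fluid, +1 per 100 particles (assumed).
particles = {0: 500, 1: 300, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0}
costs = {b: 1 + n / 100 for b, n in particles.items()}
for load, proc, blocks in rebalance(costs, 4):
    print(f"proc {proc}: blocks {blocks}, load {load:.1f}")
```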
Withers, Alex, Bockelman, Brian, Weitzel, Derek, Brown, Duncan, Gaynor, Jeff, Basney, Jim, Tannenbaum, Todd, Miller, Zach.  2018.  SciTokens: Capability-Based Secure Access to Remote Scientific Data. Proceedings of the Practice and Experience on Advanced Research Computing. :24:1–24:8.
The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
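SciTokens are JSON Web Tokens whose scope claim encodes capabilities such as read:/ligo. The hedged sketch below builds a toy token, decodes its payload, and checks a requested operation against those scopes; a real deployment must additionally verify the RS256 signature against the issuer's published key and check expiry, which is omitted here for brevity, and the claim values are illustrative.

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified!) payload segment of a JWT.
    Production code must first verify the signature and expiry."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)   # restore padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorizes(claims, action, path):
    """True if any scope like 'read:/ligo' covers the requested access."""
    for scope in claims.get('scope', '').split():
        op, _, prefix = scope.partition(':')
        if op == action and (path == prefix
                             or path.startswith(prefix.rstrip('/') + '/')):
            return True
    return False

# A toy token with SciTokens-style claims (header/signature are dummies).
claims = {'iss': 'https://demo.scitokens.org',
          'scope': 'read:/ligo write:/ligo/frames'}
token = '.'.join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip('='),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip('='),
    '',
])
p = jwt_payload(token)
print(authorizes(p, 'read', '/ligo/frames/f1.gwf'))  # True
print(authorizes(p, 'write', '/ligo'))               # False
```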
Blanquer, Ignacio, Meira, Wagner.  2018.  EUBra-BIGSEA, A Cloud-Centric Big Data Scientific Research Platform. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :47–48.
This paper describes the achievements of the EUBra-BIGSEA project, which has delivered programming models and data analytics tools for the development of distributed Big Data applications. The framework components support multiple data models (e.g. data streams, multidimensional data) and efficient mechanisms to ensure privacy and security, on top of a QoS-aware layer for the smart and rapid provisioning of resources in a cloud-based environment.
Ocaña, Kary, Galheigo, Marcelo, Osthoff, Carla, Gadelha, Luiz, Gomes, Antônio Tadeu A., De Oliveira, Daniel, Porto, Fabio, Vasconcelos, Ana Tereza.  2019.  Towards a Science Gateway for Bioinformatics: Experiences in the Brazilian System of High Performance Computing. 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :638–647.

Science gateways bring out the possibility of reproducible science, as they integrate reusable techniques, data and workflow management systems, security mechanisms, and high performance computing (HPC). We introduce BioinfoPortal, a science gateway that integrates a suite of different bioinformatics applications using HPC and data management resources provided by the Brazilian National HPC System (SINAPAD). BioinfoPortal follows the Software as a Service (SaaS) model, and its web server is freely available for academic use. The goal of this paper is to describe the science gateway and its usage, addressing the challenges of designing a multiuser computational platform for parallel/distributed executions of large-scale bioinformatics applications on Brazilian HPC resources. We also present a study of the performance and scalability of several bioinformatics applications executed in the HPC environments, and perform machine learning analyses to predict the HPC allocation/usage features under which the bioinformatics applications perform better via BioinfoPortal.

Huang, Jingwei.  2018.  From Big Data to Knowledge: Issues of Provenance, Trust, and Scientific Computing Integrity. 2018 IEEE International Conference on Big Data (Big Data). :2197–2205.
This paper addresses the nature of data and knowledge and the relation between them; the variety of views as a characteristic of Big Data, given that data may come from many different sources reflecting different viewpoints; and the associated essential issues of data provenance, knowledge provenance, scientific computing integrity, and trust in the data science process. Moving towards data-intensive science and engineering, it is of paramount importance to ensure Scientific Computing Integrity (SCI). A failure of SCI may be caused by malicious attacks, natural environmental changes, faults of scientists, operations mistakes, faults of supporting systems, faults of processes, or errors in the data or theories on which a piece of research relies. The complexity of scientific workflows and large provenance graphs, as well as the various causes of SCI failures, make ensuring SCI extremely difficult. Provenance and trust play a critical role in evaluating SCI. This paper reports our progress in building a model for provenance-based trust reasoning about SCI.
Trunov, Artem S., Voronova, Lilia I., Voronov, Vyacheslav I., Ayrapetov, Dmitriy P..  2018.  Container Cluster Model Development for Legacy Applications Integration in Scientific Software System. 2018 IEEE International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT QM IS). :815–819.
A feature of modern scientific information systems is their integration with computing applications, providing distributed computer simulation and intelligent processing of Big Data using high-efficiency computing. These software systems often include legacy applications in different programming languages, with non-standardized interfaces. To solve the problem of application integration, containerization systems are used, which allow the environment to be configured in the shortest time to deploy a software system. However, no such systems exist for computer simulation systems with a large number of nodes. This article considers the task of combining containers into a cluster and integrating legacy applications to manage the distributed software system MD-SLAG-MELT v.14, which supports high-performance computing and visualization of computer experiment results. Testing results for the container cluster, including the automatic load-sharing module for the MD-SLAG-MELT v.14 system, are given.
2019-08-12
Karande, Vishal, Chandra, Swarup, Lin, Zhiqiang, Caballero, Juan, Khan, Latifur, Hamlen, Kevin.  2018.  BCD: Decomposing Binary Code Into Components Using Graph-Based Clustering. Proceedings of the 2018 on Asia Conference on Computer and Communications Security. :393–398.

Complex software is built by composing components implementing largely independent blocks of functionality. However, once the sources are compiled into an executable, that modularity is lost. This is unfortunate for code recipients, for whom knowing the components has many potential benefits, such as improved program understanding for reverse engineering, identifying shared code across different programs, binary code reuse, and authorship attribution. This paper proposes a novel approach for decomposing such source-free program executables into components. Given an executable, the approach first statically builds a decomposition graph, where nodes are functions and edges capture three types of relationships: code locality, data references, and function calls. It then applies a graph-theoretic approach to partition the functions into disjoint components. A prototype implementation, BCD, demonstrates the approach's efficacy: evaluation of BCD on 25 C++ binary programs, recovering the methods belonging to each class, achieves high precision and recall scores.
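As a hedged sketch of the decomposition-graph idea (not BCD's actual clustering, which weights the three edge types and applies a community-detection algorithm), the snippet below builds a function graph from the three relationship types and partitions it into connected components with union-find — a deliberate simplification; the function names are invented for illustration.

```python
# Nodes are functions; edges come from the three relation types BCD uses.
calls    = [('png_read', 'inflate'), ('inflate', 'crc32')]
locality = [('png_read', 'png_free')]          # adjacent in the binary
datarefs = [('jpeg_init', 'jpeg_tables')]      # touch the same global

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]          # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for u, v in calls + locality + datarefs:
    union(u, v)

components = {}
for f in parent:
    components.setdefault(find(f), []).append(f)
print(list(components.values()))
# e.g. [['png_read', 'inflate', 'crc32', 'png_free'],
#       ['jpeg_init', 'jpeg_tables']]
```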

2019-06-10
Mpanti, Anna, Nikolopoulos, Stavros D., Polenakis, Iosif.  2018.  A Graph-Based Model for Malicious Software Detection Exploiting Domination Relations Between System-Call Groups. Proceedings of the 19th International Conference on Computer Systems and Technologies. :20–26.

In this paper, we propose a graph-based algorithmic technique for malware detection, utilizing System-call Dependency Graphs (ScDG) obtained through taint analysis traces. We leverage the grouping of system calls into system-call groups, with respect to their functionality, to merge disjoint vertices of ScDG graphs, transforming them to Group Relation Graphs (GrG); note that GrG graphs represent malware's behavior and are hence more resilient to probable mutations of its structure. More precisely, we extend the use of GrG graphs by mapping their vertices onto the plane, utilizing the degrees and the vertex weights of a specific underlying graph of the GrG graph to compute domination relations. Furthermore, we investigate how the activity of each system-call group can be utilized to distinguish graph representations of malware and benign software. The domination relations among the vertices of GrG graphs result in a new graph representation that we call the Coverage Graph of the GrG graph. Finally, we evaluate the potential of our detection model using graph similarity between Coverage Graphs of known malicious and benign software samples of various types.
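A hedged sketch of the ScDG-to-GrG transformation described here: each system-call vertex is mapped to its functional group, merged vertices collapse, and duplicate or intra-group edges are dropped, yielding the smaller, mutation-resilient group-relation graph. The grouping table and the trace are illustrative assumptions.

```python
# Assumed grouping of system calls by functionality (illustrative only).
GROUP = {'open': 'FILE', 'read': 'FILE', 'write': 'FILE',
         'socket': 'NET', 'connect': 'NET', 'send': 'NET',
         'fork': 'PROC', 'execve': 'PROC'}

def to_grg(scdg_edges):
    """Collapse a System-call Dependency Graph into a Group Relation
    Graph: map each call to its group, merge duplicate edges, and drop
    the self-loops produced by merging vertices of the same group."""
    grg = set()
    for u, v in scdg_edges:
        gu, gv = GROUP[u], GROUP[v]
        if gu != gv:
            grg.add((gu, gv))
    return grg

# A taint-analysis trace of a downloader: network data flows into a
# file, which is then executed.
scdg = [('socket', 'connect'), ('connect', 'send'),
        ('send', 'write'), ('open', 'write'), ('write', 'execve')]
print(sorted(to_grg(scdg)))  # [('FILE', 'PROC'), ('NET', 'FILE')]
```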

2017-12-04
Won, J., Singla, A., Bertino, E..  2017.  CertificateLess Cryptography-Based Rule Management Protocol for Advanced Mission Delivery Networks. 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW). :7–12.

Assured Mission Delivery Network (AMDN) is a collaborative network to support data-intensive scientific collaborations in a multi-cloud environment. Each scientific collaboration group, called a mission, specifies a set of rules to handle computing and network resources. Security is an integral part of the AMDN design since the rules must be set by authorized users and the data generated by each mission may be privacy-sensitive. In this paper, we propose a CertificateLess cryptography-based Rule-management Protocol (CL-RP) for AMDN, which supports authenticated rule registrations and updates with non-repudiation. We evaluate CL-RP through test-bed experiments and compare it with other standard protocols.

Sattar, N. S., Adnan, M. A., Kali, M. B..  2017.  Secured aerial photography using Homomorphic Encryption. 2017 International Conference on Networking, Systems and Security (NSysS). :107–114.

Aerial photography is fast becoming essential in scientific research that requires multi-agent systems viewing a scene from several perspectives. We propose a secured system using one of the well-known public-key cryptosystems, namely NTRU, which is somewhat homomorphic in nature. We process aerial photographs captured by multiple agents. The agents encrypt the images and upload them to an untrusted cloud server. Cloud computing is ubiquitous in the modern era, and the public cloud is used everywhere for its shared, on-demand nature, yet the cloud environment faces many security and privacy issues that need to be solved. This paper focuses on how to use the cloud so effectively that there remains no possibility of data or computation breaches from the cloud server itself, since the server is prone to treachery in different ways. The cloud server computes on the encrypted data without knowing the contents of the images. After concatenation, the encrypted result is delivered to the concerned authority, where it is decrypted, retaining its originality. We set up our experiment in the Amazon EC2 cloud, where several instances acted as the agents and one instance acted as the server. We varied several parameters to minimize encryption time. The experiments produced the desired result within feasible time while sustaining image quality. This work ensures data security in the public cloud, which was our main concern.
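The paper's scheme builds on NTRU's somewhat-homomorphic property; a faithful NTRU implementation is too long to sketch here, so the snippet below uses the textbook Paillier cryptosystem — a different but additively homomorphic scheme — purely to illustrate a server computing on encrypted pixel data without seeing it. The tiny key is for demonstration only.

```python
import math
import random

# Textbook Paillier, standing in for the paper's NTRU-based scheme.
p, q = 1009, 1013                # toy primes -- far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)             # valid because the generator g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Agents encrypt pixel values; the untrusted cloud multiplies the
# ciphertexts, which corresponds to adding the hidden plaintext pixels.
pixels = [17, 200, 42]
ciphertexts = [encrypt(px) for px in pixels]
cloud_product = math.prod(ciphertexts) % n2
print(decrypt(cloud_product))    # 259 == 17 + 200 + 42
```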

Thayananthan, V., Abdulkader, O., Jambi, K., Bamahdi, A. M..  2017.  Analysis of Cybersecurity Based on Li-Fi in Green Data Storage Environments. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :327–332.

Industrial networking faces many issues depending on the type of industry, data storage, data centers, cloud computing, etc. Green data storage improves the scientific, commercial, and industrial profile of networking. Future industries are looking for cybersecurity solutions built on low-cost resources, and energy supply is the main problem in industrial networking. To address these problems, green data storage will be the priority, because data centers and cloud computing deal with data storage. In this analysis, we use a solar energy source and different light rays; the methodology includes a prism and Li-Fi techniques. In this approach, light rays are sent through the prism, which allows us to transmit data at different frequencies. This approach provides green energy and maximum protection within the data center. As a result, we illustrate that cloud services within a green data center in industrial networking achieve better protection with low-cost energy. Finally, we conclude that Li-Fi enhances the use of green energy and protection, to the advantage of current and future industrial networking.

Al-Shomrani, A., Fathy, F., Jambi, K..  2017.  Policy enforcement for big data security. 2017 2nd International Conference on Anti-Cyber Crimes (ICACC). :70–74.

Security and privacy of big data become challenging as data grows and becomes accessible to more and more clients. Large-scale data storage is becoming a necessity for healthcare, business segments, government departments, scientific endeavors, and individuals. Our research focuses on privacy and security and on how to make sure that big data remains secured. Managing security policy is a challenge that our framework handles for big data. The privacy policy needs to be integrated, flexible, context-aware, and customizable. We build a framework that receives data from the customer, analyzes the received data, extracts the privacy policy, and then identifies the sensitive data. In this paper we present the techniques for the privacy policy that will be used in our framework.